DevOps Interview TOP 1000 Questions
1.What is DevOps?
You can answer it by describing what DevOps means to you and/or rely on how companies define it. Here is an example.
Amazon:
"DevOps is the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity: evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes. This speed enables organizations to better serve their customers and compete more effectively in the market."
2.What are the benefits of DevOps? What can it help us to achieve?
Collaboration
Improved delivery
Security
Speed
Scale
Reliability
3.What are the anti-patterns of DevOps?
A couple of examples:
One person is in charge of specific tasks. For example, there is only one person who is allowed to merge everyone else's code into the repository.
Treating production differently from the development environment. For example, not implementing security in the development environment.
Not allowing someone to push to production on Friday ;)
4.How would you describe a successful DevOps engineer or a team?
The answer can focus on:
Collaboration
Communication
Setting up and improving workflows and processes (related to testing, delivery, ...)
Dealing with issues
Things to think about:
What should DevOps teams or engineers NOT focus on or do?
Do DevOps teams or engineers have to be innovative or practice innovation as part of their role?
Tooling:
5.What do you take into consideration when choosing a tool/technology?
A few ideas to think about:
mature/stable vs. cutting edge
community size
architecture aspects - agent vs. agentless, master vs. masterless, etc.
learning curve
6.Can you describe which tool or platform you chose to use in some of the following areas and how?
CI/CD
Provisioning infrastructure
Configuration Management
Monitoring & alerting
Logging
Code review
Code coverage
Issue Tracking
Containers and Containers Orchestration
Tests
This is a more practical version of the previous question, where you might be asked additional specific questions about the technology you chose.
CI/CD - Jenkins, Circle CI, Travis, Drone, Argo CD, Zuul
Provisioning infrastructure - Terraform, CloudFormation
Configuration Management - Ansible, Puppet, Chef
Monitoring & alerting - Prometheus, Nagios
Logging - Logstash, Graylog, Fluentd
Code review - Gerrit, Review Board
Code coverage - Cobertura, Clover, JaCoCo
Issue tracking - Jira, Bugzilla
Containers and Containers Orchestration - Docker, Podman, Kubernetes, Nomad
Tests - Robot, Serenity, Gauge
7.A team member of yours, suggests to replace the current CI/CD platform used by the organization with a new one. How would you reply?
Things to think about:
What do we gain from doing so? Are there new features in the new platform? Does the new platform deal with some of the limitations of the current platform?
What is this suggestion based on? In other words, did they try out the new platform? Was there extensive technical research?
What will the switch from one platform to another require from the organization? For example, training the users of the platform? How much time does the team have to invest in such a move?
Version Control:
8.What is Version Control?
Version control is the system of tracking and managing changes to software code.
It helps software teams to manage changes to source code over time.
Version control also helps developers move faster and allows software teams to preserve efficiency and agility as the team scales to include more developers.
9.What is a commit?
In Git, a commit is a snapshot of your repo at a specific point in time.
The git commit command will save all staged changes, along with a brief description from the user, in a “commit” to the local repository.
10.What is a merge?
Merging is Git's way of putting a forked history back together again. The git merge command lets you take the independent lines of development created by git branch and integrate them into a single branch.
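As a quick illustration of commits and merges together, here is a minimal sketch driven from Python (assuming Git is installed and the script runs in an empty scratch directory):
```python
import subprocess

def git(*args):
    # Thin wrapper that runs a git command and fails loudly on errors
    subprocess.run(["git", *args], check=True)

# Create a repository with an initial commit on the default branch
git("init", "demo")
git("-C", "demo", "config", "user.email", "demo@example.com")
git("-C", "demo", "config", "user.name", "Demo User")
git("-C", "demo", "commit", "--allow-empty", "-m", "initial commit")

# Develop on an independent branch and commit a snapshot of the change
git("-C", "demo", "checkout", "-b", "feature")
with open("demo/app.txt", "w") as f:
    f.write("hello from the feature branch\n")
git("-C", "demo", "add", "app.txt")
git("-C", "demo", "commit", "-m", "add app.txt")

# Integrate the independent line of development back into the default branch
git("-C", "demo", "checkout", "-")
git("-C", "demo", "merge", "feature")
```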
11.What is a merge conflict?
A merge conflict is an event that occurs when Git is unable to automatically resolve differences in code between two commits. When all the changes in the code occur on different lines or in different files, Git will successfully merge commits without your help.
12.What best practices are you familiar with regarding version control?
Use a descriptive commit message
Make each commit a logical unit
Incorporate others' changes frequently
Share your changes frequently
Coordinate with your co-workers
Don't commit generated files
CI/CD:
13.What is Continuous Integration?
A development practice where developers integrate code into a shared repository frequently. It can range from a couple of changes a day or a week to several changes an hour at larger scales.
Each piece of code (change/patch) is verified, to make sure the change is safe to merge. Today, it's a common practice to test the change using an automated build that makes sure the code can be integrated. It can be one build which runs several tests at different levels (unit, functional, etc.) or several separate builds that all or some have to pass in order for the change to be merged into the repository.
14.What is Continuous Deployment?
A development strategy used by developers to release software automatically into production, where any code commit must pass through an automated testing phase. Only when this is successful is the release considered production worthy. This eliminates any human interaction and should be implemented only after production-ready pipelines have been set up with real-time monitoring and reporting of deployed assets. If any issues are detected in production, it should be easy to roll back to the previous working state.
15.Can you describe an example of a CI (and/or CD) process starting the moment a developer submitted a change/PR to a repository?
There are many answers for such a question, as CI processes vary, depending on the technologies used and the type of the project to where the change was submitted. Such processes can include one or more of the following stages:
Compile
Build
Install
Configure
Update
Test
An example of one possible answer:
A developer submitted a pull request to a project. The PR (pull request) triggered two jobs (or one combined job): one job for running a lint test on the change and a second job for building a package which includes the submitted change and running multiple API/scenario tests using that package. Once all tests pass and the change is approved by a maintainer/core, it's merged/pushed to the repository. If some of the tests fail, the change will not be allowed to be merged/pushed to the repository.
A completely different answer or CI process can describe how a developer pushes code to a repository; a workflow is then triggered to build a container image and push it to a registry. Once the image is in the registry, the new changes are applied to the Kubernetes (k8s) cluster.
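To make the first flow concrete, here is a minimal, hypothetical sketch of such a change-triggered job as a plain Python script. The lint/test commands are assumptions (flake8 and pytest must be installed); a real CI system such as Jenkins or GitHub Actions would express the same stages in its own pipeline syntax:
```python
import subprocess
import sys

# Stages a CI job might run for every submitted change (commands are examples)
STAGES = [
    ("lint", ["python", "-m", "flake8", "."]),            # static checks on the change
    ("unit tests", ["python", "-m", "pytest", "tests"]),  # verify the change is safe to merge
]

def run_stage(name, command):
    print(f"--- running stage: {name} ---")
    result = subprocess.run(command)
    return result.returncode == 0

def main():
    for name, command in STAGES:
        if not run_stage(name, command):
            # A failing stage blocks the merge, mirroring the gating described above
            print(f"stage '{name}' failed - change cannot be merged")
            sys.exit(1)
    print("all stages passed - change can be merged")

if __name__ == "__main__":
    main()
```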
16.What is Continuous Delivery?
A development strategy used to frequently deliver code to QA and Ops for testing. This entails having a staging area that has production-like features where changes can only be accepted for production after a manual review. Because of this human involvement there is usually a time lag between release and review, making it slower and more error prone compared to continuous deployment.
17.What CI/CD best practices are you familiar with? Or what do you consider as CI/CD best practice?
Automated process of building, testing and deploying software
Commit and test often
Testing/Staging environment should be a clone of production environment
18.You are given a pipeline and a pool with 3 workers: virtual machine, baremetal and a container. How will you decide on which one of them to run the pipeline?
19.Where do you store CI/CD pipelines? Why?
There are multiple approaches as to where to store the CI/CD pipeline definitions:
App Repository - store them in the same repository of the application they are building or testing (perhaps the most popular one)
Central Repository - store all organization's/project's CI/CD pipelines in one separate repository (perhaps the best approach when multiple teams test the same set of projects and they end up having many pipelines)
CI repo for every app repo - you separate CI related code from app code but you don't put everything in one place (perhaps the worst option due to the maintenance overhead)
20.Would you prefer a "configuration->deployment" model or "deployment->configuration"? Why?
Both have advantages and disadvantages. With the "configuration->deployment" model, for example, where you build one image to be used by multiple deployments, there is less chance of deployments being different from one another, so it has a clear advantage of a consistent environment.
21.Explain mutable vs. immutable infrastructure
In the mutable infrastructure paradigm, changes are applied on top of the existing infrastructure and over time the infrastructure builds up a history of changes. Ansible, Puppet and Chef are examples of tools which follow the mutable infrastructure paradigm.
In the immutable infrastructure paradigm, every change is actually a new infrastructure. So a change to a server will result in a new server instead of updating the existing one. Terraform is an example of a technology which follows the immutable infrastructure paradigm.
22.Explain "Software Distribution"
Read this fantastic article on the topic.
From the article: "Thus, software distribution is about the mechanism and the community that takes the burden and decisions to build an assemblage of coherent software that can be shipped."
23.Why are there multiple software distributions? What differences can they have?
Different distributions can focus on different things: different environments (server vs. mobile vs. desktop), support for specific hardware, specialization in certain domains (security, multimedia, ...), etc. Basically, different aspects of the software and what it supports get a different priority in each distribution.
24.What is a Software Repository?
Wikipedia: "A software repository, or “repo” for short, is a storage location for software packages. Often a table of contents is stored, as well as metadata."
Read more here
25.What ways are there to distribute software? What are the advantages and disadvantages of each method?
Source - Maintain a build script within the version control system so that users can build your app after cloning the repository. Advantage: users can quickly check out different versions of the application. Disadvantage: requires build tools installed on the user's machine.
Archive - collect all your app files into one archive (e.g. tar) and deliver it to the user. Advantage: users get everything they need in one file. Disadvantage: requires repeating the same procedure when updating; not good if there are a lot of dependencies.
Package - depending on the OS, you can use your OS package format (e.g. in RHEL/Fedora it's RPM) to deliver your software with a way to install, uninstall and update it using the standard packager commands. Advantage: the package manager takes care of installation, uninstallation, updating and dependency management. Disadvantage: requires managing a package repository.
Images - either VM or container images where your package is included with everything it needs in order to run successfully. Advantage: everything is preinstalled and it has a high degree of environment isolation. Disadvantage: requires knowledge of building and optimizing images.
26.What is caching? How does it work? Why is it important?
Caching is fast access to frequently used resources which are computationally expensive or IO intensive and do not change often. There can be several layers of cache, starting from CPU caches up to distributed cache systems. Common ones are in-memory caching and distributed caching.
Caches are typically data structures that contain some data, such as a hashtable or dictionary. However, any data structure can provide caching capabilities, like a set, sorted set, sorted dictionary, etc. While caching is used in many applications, it can create subtle bugs if not implemented or used correctly. For example, cache invalidation, expiration or updating is usually quite challenging.
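As a small illustration of in-memory caching with expiration (one of the tricky parts mentioned above), here is a sketch of a tiny TTL cache in Python:
```python
import time

class TTLCache:
    """A tiny in-memory cache where every entry expires after ttl seconds."""

    def __init__(self, ttl=60):
        self.ttl = ttl
        self._store = {}  # key -> (value, timestamp)

    def set(self, key, value):
        self._store[key] = (value, time.monotonic())

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, stored_at = entry
        if time.monotonic() - stored_at > self.ttl:
            # Expired entry: invalidate it so callers fall back to the source of truth
            del self._store[key]
            return default
        return value

# Usage: cache an expensive lookup for 5 seconds
cache = TTLCache(ttl=5)
cache.set("user:42", {"name": "Alice"})
print(cache.get("user:42"))   # served from the cache
time.sleep(6)
print(cache.get("user:42"))   # None - entry expired, recompute/refetch
```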
27.Explain stateless vs. stateful
Stateless applications don't store any data on the host, which makes them ideal for horizontal scaling and microservices. Stateful applications depend on storage to save state and data; databases, for example, are typically stateful applications.
28.What is Reliability? How does it fit DevOps?
Reliability, when used in DevOps context, is the ability of a system to recover from infrastructure failure or disruption. Part of it is also being able to scale based on your organization or team demands.
29.What "Availability" means? What means are there to track Availability of a service?
30.Describe the workflow of setting up some type of web server (Apache, IIS, Tomcat, ...)
31.How does a web server work?
32.Describe the architecture of a service/app/project/... you designed and/or implemented
33.What types of tests are you familiar with?
Styling, unit, functional, API, integration, smoke, scenario, ...
34.You need to periodically install a package (unless it already exists) on different operating systems (Ubuntu, RHEL, ...). How would you do it?
There are multiple ways to answer this question (there is no right and wrong here):
Simple cron job
Pipeline with a configuration management technology (such as Puppet, Ansible, Chef, etc.) ...
35.What is Chaos Engineering?
Wikipedia: "Chaos engineering is the discipline of experimenting on a software system in production in order to build confidence in the system's capability to withstand turbulent and unexpected conditions"
36.What is "infrastructure as code"? What implementations of IaC are you familiar with?
IaC (infrastructure as code) is a declarative approach to defining the infrastructure or architecture of a system. Some implementations are ARM templates for Azure and Terraform, which can work across multiple cloud providers.
37.How do you manage build artifacts?
Build artifacts are usually stored in a repository. They can be used in release pipelines for deployment purposes. Usually there is retention period on the build artifacts.
38.What deployment strategies are you familiar with or have used?
There are several deployment strategies:
* Rolling
* Blue green deployment
* Canary releases
* Recreate strategy
39.You joined a team where everyone develops one project, and the practice is to run tests locally on their workstation and push to the repository if the tests pass. What is the problem with the process as it is now and how would you improve it?
40.Explain test-driven development (TDD)
41.Explain agile software development
42.What do you think about the following sentence?: "implementing or practicing DevOps leads to more secure software"
43.Do you know what is a "post-mortem meeting"? What is your opinion on that?
44.How do you perform capacity planning for your CI/CD resources? (e.g. servers, storage, etc.)
45.How would you structure/implement CD for an application which depends on several other applications?
46.How do you measure your CI/CD quality? Are there any metrics or KPIs you are using for measuring the quality?
47.What is a configuration drift? What problems is it causing?
Configuration drift happens when, in an environment of servers with the exact same configuration and software, a certain server (or servers) receives updates or configuration that the other servers don't get, and over time these servers become slightly different from all the others.
This situation might lead to bugs which are hard to identify and reproduce.
48.How to deal with a configuration drift?
Configuration drift can be avoided with a desired state configuration (DSC) implementation. Desired state configuration is typically a declarative file that defines how a system should look. There are tools to enforce the desired state, such as Terraform or Azure DSC. Enforcement can follow incremental or complete strategies.
49.Explain Declarative and Procedural styles. The technologies you are familiar with (or using) are using procedural or declarative style?
Declarative - You write code that specifies the desired end state
Procedural - You describe the steps to get to the desired end state
Declarative Tools - Terraform, Puppet, CloudFormation
Procedural Tools - Ansible, Chef
To better emphasize the difference, consider creating two virtual instances/servers. In declarative style, you would specify two servers and the tool will figure out how to reach that state. In procedural style, you need to specify the steps to reach the end state of two instances/servers - for example, create a loop and in each iteration of the loop create one instance (running the loop twice of course).
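A toy Python sketch of the same two-server example (the create_server/delete_server helpers are hypothetical placeholders, not a real provider API):
```python
# Hypothetical provider helpers - placeholders for illustration only
def create_server(name):
    print(f"creating server {name}")

def delete_server(name):
    print(f"deleting server {name}")

# Procedural style: spell out every step needed to end up with two servers
def procedural_two_servers():
    for i in range(2):
        create_server(f"server-{i}")

# Declarative style: state the desired end state and let a reconciler figure out the steps
def reconcile(desired_count, current_servers):
    missing = desired_count - len(current_servers)
    if missing > 0:
        for i in range(missing):
            create_server(f"server-{len(current_servers) + i}")
    elif missing < 0:
        for name in current_servers[desired_count:]:
            delete_server(name)

procedural_two_servers()
reconcile(desired_count=2, current_servers=["server-0"])  # only creates the one missing server
```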
50.Do you have experience with testing cross-projects changes? (aka cross-dependency)
Note: cross-dependency is when you have two or more changes to separate projects and you would like to test them in a mutual build instead of testing each change separately.
51.Have you contributed to an open source project? Tell me about this experience
52.What is Distributed Tracing?
53.What is GitOps?
GitLab: "GitOps is an operational framework that takes DevOps best practices used for application development such as version control, collaboration, compliance, and CI/CD tooling, and applies them to infrastructure automation".
54.What are the differences between SRE and DevOps?
Google: "One could view DevOps as a generalization of several core SRE principles to a wider range of organizations, management structures, and personnel."
55.What is an SRE team responsible for?
Google: "the SRE team is responsible for availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning of their services"
56.What is an error budget?
Atlassian: "An error budget is the maximum amount of time that a technical system can fail without contractual consequences."
Read more about it here
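A quick back-of-the-envelope example of turning an availability SLO into an error budget (assuming a 30-day month):
```python
# Error budget for a 99.9% availability SLO over a 30-day month
slo = 0.999
minutes_in_month = 30 * 24 * 60            # 43,200 minutes
error_budget = (1 - slo) * minutes_in_month
print(f"{error_budget:.1f} minutes of allowed downtime")  # 43.2 minutes
```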
57.What do you think about the following statement: "100% is the only right availability target for a system"
Wrong. No system can guarantee 100% availability, as no system is immune to downtime. Many systems and services will fall somewhere between 99% and 100% uptime (or at least this is how most systems and services should be).
58.What are MTTF (mean time to failure) and MTTR (mean time to repair)? What these metrics help us to evaluate?
* MTTF (mean time to failure), also known as uptime, can be defined as how long the system runs before it fails.
* MTTR (mean time to recover) on the other hand, is the amount of time it takes to repair a broken system.
* MTBF (mean time between failures) is the amount of time between failures of the system.
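These metrics can be combined into a rough steady-state availability estimate, availability = MTTF / (MTTF + MTTR). A quick example with made-up numbers:
```python
# Steady-state availability estimated from MTTF and MTTR (example numbers)
mttf_hours = 990   # average time the system runs before failing
mttr_hours = 10    # average time needed to repair it
availability = mttf_hours / (mttf_hours + mttr_hours)
print(f"availability: {availability:.1%}")  # 99.0%
```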
59.What is the role of monitoring in SRE?
Google: "Monitoring is one of the primary means by which service owners keep track of a system’s health and availability"
60.What is Jenkins? What have you used it for?
Jenkins is an open source automation tool written in Java, with plugins built for Continuous Integration purposes. Jenkins is used to build and test your software projects continuously, making it easier for developers to integrate changes to the project and for users to obtain a fresh build. It also allows you to continuously deliver your software by integrating with a large number of testing and deployment technologies.
Jenkins integrates development life-cycle processes of all kinds, including build, document, test, package, stage, deploy, static analysis and much more.
61.What are the advantages of Jenkins over its competitors? Can you compare it to one of the following systems?
Travis
Bamboo
Teamcity
CircleCI
62.What are the limitations or disadvantages of Jenkins?
This might be considered to be an opinionated answer:
Old-fashioned dashboards with few options to customize them
Containers readiness (this has improved with Jenkins X)
By itself, it doesn't have many features. On the other hand, there are many plugins created by the community to expand its abilities
Managing Jenkins and its pipelines as code can be one hell of a nightmare
63.Explain the following:
Job
Build
Plugin
Node or Worker
Executor
A job is an automation definition: what and where to execute once the user clicks on "build".
A build is a running instance of a job. You can have one or more builds at any given point of time (unless limited by configuration).
A plugin is a community-developed extension that adds functionality to Jenkins (integrations, build steps, UI changes, ...).
A worker is the machine/instance on which the build is running. When a build starts, it "acquires" a worker out of a pool to run on.
An executor is a property of the worker, defining how many builds can run on that worker in parallel. An executor value of 3 means that 3 builds can run on that worker at any point in time (not necessarily of the same job; any builds).
64.What plugins have you used in Jenkins?
65.Have you used Jenkins for CI or CD processes? Can you describe them?
66.What type of jobs are there? Which types have you used?
67.How did you report build results to users? What ways are there to report the results?
You can report via:
Emails
Messaging apps
Dashboards
Each has its own advantages and disadvantages. Emails, for example, if sent too often, can eventually be disregarded or ignored.
68.You need to run unit tests every time a change is submitted to a given project. Describe in detail how your pipeline would look and what will be executed in each stage
The pipeline will have multiple stages:
Clone the project
Install test dependencies (for example, if I need tox package to run the tests, I will install it in this stage)
Run unit tests
(Optional) report results (For example an email to the users)
Archive the relevant logs/files
69.How to secure Jenkins?
70.Describe how do you add new nodes (agents) to Jenkins
You can describe the UI way to add new nodes but better to explain how to do in a way that scales like a script or using dynamic source for nodes like one of the existing clouds.
71.How to acquire multiple nodes for one specific build?
72.Whenever a build fails, you would like to notify the team owning the job regarding the failure and provide failure reason. How would you do that?
73.If you are managing a dozen jobs, you can probably use the Jenkins UI. But how do you manage the creation and deletion of hundreds of jobs every week/month?
74.What are some of Jenkins limitations?
Testing cross-dependencies (changes from multiple projects together)
Starting builds from any stage (although CloudBees implemented something called checkpoints)
75.How would you implement an option of starting a build from a certain stage and not from the beginning?
76.Do you have experience with developing a Jenkins plugin? Can you describe this experience?
77.Have you written Jenkins scripts? If yes, what for and how do they work?
Cloud:
78.What is Cloud Computing? What is a Cloud Provider?
79.What are the advantages of cloud computing? Mention at least 3 advantages
Pay as you go (or consumption-based payment) - you are paying only for what you are using. No upfront payments and payment stops when resources are no longer used.
Scalable - resources are scaled down or up based on demand
80.What types of Cloud Computing services are there?
IAAS - Infrastructure as a Service
PAAS - Platform as a Service
SAAS - Software as a Service
81.Explain each of the following and give an example:
IAAS
PAAS
SAAS
IAAS - Users have control over the complete operating system and don't need to worry about the physical resources, which are managed by the Cloud Service Provider.
PAAS - The Cloud Service Provider takes care of the operating system and middleware; users only need to focus on their data and application.
SAAS - A cloud-based method of providing software to users: the software logic runs in the cloud and is managed by the Cloud Service Provider instead of being run on-premises.
82.What types of clouds (or cloud deployments) are there?
Public
Hybrid
Private
83.Explain each of the following Cloud Computing Deployments:
Public
Private
Hybrid
Public - Cloud services sharing computing resources among multiple customers
Private - Cloud services having computing resources limited to a specific customer or organization, managed by a third party or the organization itself
Hybrid - Combination of public and private clouds
84.What are the differences between Cloud Providers and On-Premise solution?
With cloud providers, someone else owns and manages the hardware, hires the relevant infrastructure teams and pays for real estate (for both hardware and people). You can focus on your business.
With an on-premise solution, it's quite the opposite. You need to take care of the hardware, the infrastructure teams and pay for everything, which can be quite expensive. On the other hand, it's tailored to your needs.
85.What is Serverless Computing?
The main idea behind serverless computing is that you don't need to manage the creation and configuration of servers. All you need to focus on is splitting your app into multiple functions which will be triggered by some actions.
It's important to note that:
Serverless computing still uses servers. So saying there are no servers in serverless computing is completely wrong
Serverless computing allows you to have a different payment model. You basically pay only when your functions are running, and not when the VM or containers are running as in other payment models
86.Can we replace any type of computing on servers with serverless?
87.Is there a difference between managed service to SaaS or is it the same thing?
AWS
88.AWS Global Infrastructure
Explain the following
Availability zone
Region
Edge location
89.True or False? Each AWS region is designed to be completely isolated from the other AWS regions
True.
90.Do you agree with the statement "AWS region should be chosen based on proximity alone"?
Note: opinionated answer.
No. There are a couple of factors to consider when choosing a region (order doesn't mean anything):
Cost - regions vary in cost and AWS Price List API can assist in calculating the difference in cost between the different regions.
Speed
Features
AWS IAM
91.What is IAM? What are some of its features?
Full explanation is here. In short: it's used for managing users, groups, access policies & roles
92.True or False? IAM configuration is defined globally and not per region
True
93.Can you give an example of IAM best practices?
Set up MFA
Delete root account access keys
Create IAM users instead of using root for daily management
94.What are Roles?
A way of allowing one AWS service to use another AWS service. You assign roles to AWS resources. For example, you can make use of a role which allows the EC2 service to access S3 buckets (read and write).
95.What are Policies?
Policies are documents used to define permissions, i.e. what a user, group or role is able to do. Their format is JSON.
96.A user is unable to access an s3 bucket. What might be the problem?
There can be several reasons for that. One of them is a lack of policy. To solve that, the admin has to attach a policy to the user that allows access to the S3 bucket.
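A minimal sketch of attaching such an inline policy with boto3 (assuming valid AWS credentials; the user, policy and bucket names are made-up examples):
```python
import json
import boto3

iam = boto3.client("iam")

# Inline policy that allows reading objects from one specific bucket (example names)
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}

# Attach the policy to the user so the access denied error goes away
iam.put_user_policy(
    UserName="example-user",
    PolicyName="AllowExampleBucketRead",
    PolicyDocument=json.dumps(policy_document),
)
```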
97.What should you use to:
Grant access between two services/resources?
Grant user access to resources/services?
Role
Policy
98.What permissions does a new user have?
Only login access.
AWS Compute:
99.What is EC2?
"a web service that provides secure, resizable compute capacity in the cloud". Read more here
100.What is AMI?
Amazon Machine Images is "An Amazon Machine Image (AMI) provides the information required to launch an instance". Read more here
101.What are the different sources for AMIs?
Personal AMIs - AMIs you create
AWS Marketplace for AMIs - Paid AMIs, usually bundled with licensed software
Community AMIs - Free
102.What is instance type?
"the instance type that you specify determines the hardware of the host computer used for your instance" Read more about instance types here
103.True or False? The following are instance types available for a user in AWS:
Compute optimized
Network optimized
Web optimized
False. From the above list only compute optimized is available.
104.What is EBS?
"provides block level storage volumes for use with EC2 instances. EBS volumes behave like raw, unformatted block devices." More on EBS here
105.What EC2 pricing models are there?
On Demand - pay a fixed rate by the hour/second with no commitment. You can provision and terminate it at any given time.
Reserved - you get capacity reservation, basically purchasing an instance for a fixed period of time. The longer, the cheaper.
Spot - Enables you to bid whatever price you want for instances or pay the spot price.
Dedicated Hosts - physical EC2 server dedicated for your use.
106.What are Security Groups?
"A security group acts as a virtual firewall that controls the traffic for one or more instances" More on this subject here
107.How to migrate an instance to another availability zone?
108.What can you attach to an EC2 instance in order to store data?
EBS
109.What types of EC2 Reserved Instances are there?
Standard RI - most significant discount + suited for steady-state usage
Convertible RI - discount + can change RI attributes + suited for steady-state usage
Scheduled RI - launch within the time windows you reserve
Learn more about EC2 RI here
AWS Serverless Compute
110.Explain what is AWS Lambda
AWS definition: "AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume."
111.True or False? In AWS Lambda, you are charged as long as a function exists, regardless of whether it's running or not
False. Charges are being made when the code is executed.
112.Which of the following set of languages Lambda supports?
R, Swift, Rust, Kotlin
Python, Ruby, Go
Python, Ruby, PHP
Python, Ruby, Go
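Since Python is among the supported runtimes, a minimal Python Lambda function looks roughly like this (the event shape is an assumption; it depends on whatever triggers the function):
```python
import json

def lambda_handler(event, context):
    # Lambda invokes this function with the triggering event and a runtime context object
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```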
AWS Containers:
113.What is Amazon ECS?
Amazon definition: "Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service. Customers such as Duolingo, Samsung, GE, and Cook Pad use ECS to run their most sensitive and mission critical applications because of its security, reliability, and scalability."
114.What is Amazon ECR?
Amazon definition: "Amazon Elastic Container Registry (ECR) is a fully-managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images."
Learn more here
115.What is AWS Fargate?
Amazon definition: "AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS)."
Learn more here
AWS Storage:
116.Explain what is AWS S3?
S3 stands for the 3 S's: Simple Storage Service. S3 is an object storage service which is fast, scalable and durable. S3 enables customers to upload, download or store any file or object that is up to 5 TB in size.
117.What is a bucket?
An S3 bucket is a resource which is similar to folders in a file system and allows storing objects, which consist of data.
118.True or False? A bucket name must be globally unique
True
119.Explain folders and objects in regards to buckets
Folder - any sub folder in an s3 bucket
Object - The files which are stored in a bucket
120.Explain the following:
Object Lifecycles
Object Sharing
Object Versioning
Object Lifecycles - Transfer objects between storage classes based on defined rules of time periods
Object Sharing - Share objects via a URL link
Object Versioning - Manage multiple versions of an object
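A small boto3 sketch of object sharing and versioning (assuming valid AWS credentials and a bucket you own; the bucket and key names are examples):
```python
import boto3

s3 = boto3.client("s3")
bucket = "example-bucket"   # must be globally unique and owned by you

# Object versioning: keep multiple versions of every object in the bucket
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Upload an object
s3.put_object(Bucket=bucket, Key="reports/2023.csv", Body=b"col1,col2\n1,2\n")

# Object sharing: generate a pre-signed URL that is valid for one hour
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": bucket, "Key": "reports/2023.csv"},
    ExpiresIn=3600,
)
print(url)
```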
121.Explain Object Durability and Object Availability
Object Durability - the percent over a one-year time period that a file will not be lost
Object Availability - the percent over a one-year time period that a file will be accessible
122.What is a storage class? What storage classes are there?
Each object has a storage class assigned to it, affecting its availability and durability. This also has an effect on costs. Storage classes offered today:
Standard:
Used for general, all-purpose storage (mostly storage that needs to be accessed frequently)
The most expensive storage class
11x9% durability
99.99% availability
Default storage class
Standard-IA (Infrequent Access)
Long lived, infrequently accessed data but must be available the moment it's being accessed
11x9% durability
99.90% availability
One Zone-IA (Infrequent Access):
Long-lived, infrequently accessed, non-critical data
Less expensive than Standard and Standard-IA storage classes
11x9% durability (within a single Availability Zone)
99.50% availability
Intelligent-Tiering:
Long-lived data with changing or unknown access patterns. Basically, in this class the data automatically moves to the class most suitable for it based on usage patterns
Price depends on the used class
11x9% durability
99.90% availability
Glacier: Archive data with retrieval time ranging from minutes to hours
Glacier Deep Archive: Archive data that rarely, if ever, needs to be accessed with retrieval times in hours
Both Glacier and Glacier Deep Archive are:
The cheapest storage classes
have 9x9% durability
113.A customer would like to move data which is rarely accessed from the standard storage class to the cheapest class there is. Which storage class should be used?
One Zone-IA
Glacier Deep Archive
Intelligent-Tiering
Glacier Deep Archive
114.What Glacier retrieval options are available for the user?
Expedited, Standard and Bulk
115.True or False? Each AWS account can store up to 500 PetaByte of data. Any additional storage will cost double
False. Unlimited capacity.
116.Explain what is Storage Gateway
"AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage". More on Storage Gateway here
117.Explain the following Storage Gateway deployments types
File Gateway
Volume Gateway
Tape Gateway
Explained in detail here
118.What is the difference between stored volumes and cached volumes?
Stored Volumes - Data is located at the customer's data center and periodically backed up to AWS
Cached Volumes - Data is stored in the AWS cloud and cached at the customer's data center for quick access
119.What is "Amazon S3 Transfer Acceleration"?
AWS definition: "Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket"
120.Explain data consistency
121.Can you host dynamic websites on S3? What about static websites?
122.What security measures have you taken in context of S3?
123.What storage options are there for EC2 Instances?
124.What is Amazon EFS?
Amazon definition: "Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources."
125.What is AWS Snowmobile?
"AWS Snowmobile is an Exabyte-scale data transfer service used to move extremely large amounts of data to AWS."
AWS Disaster Recovery
126.In regards to disaster recovery, what is RTO and RPO?
RTO - The maximum acceptable length of time that your application can be offline.
RPO - The maximum acceptable length of time during which data might be lost from your application due to an incident.
127.What types of disaster recovery techniques AWS supports?
The Cold Method - Periodic backups and sending the backups off-site
Pilot Light - Data is mirrored to an environment which is always running
Warm Standby - Running scaled down version of production environment
Multi-site - Duplicated environment that is always running
128.Which disaster recovery option has the highest downtime and which has the lowest?
Lowest - Multi-site
Highest - The cold method
AWS Cloudfront
129.Explain what is CloudFront
AWS definition: "Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment."
130.Explain the following
Origin
Edge location
Distribution
131.What delivery methods are available for the user with CDN?
132.True or False? Objects are cached for the life of the TTL
True
133.What is AWS Snowball?
A transport solution which was designed for transferring large amounts of data (petabyte-scale) into and out of the AWS cloud
AWS ELB
134.What is auto scaling?
AWS definition: "AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost"
135.True or False? Auto Scaling is about adding resources (such as instances) and not about removing resources
False. Auto Scaling adjusts capacity and this can mean removing some resources, based on usage and performance.
136.What types of load balancers are supported in EC2 and what are they used for?
Application LB - layer 7 traffic
Network LB - ultra-high performance or static IP address
Classic LB - low costs, good for test or dev environments
AWS Security
137.What is the shared responsibility model? What AWS is responsible for and what the user is responsible for based on the shared responsibility model?
The shared responsibility model defines what the customer is responsible for and what AWS is responsible for.
138.True or False? Based on the shared responsibility model, Amazon is responsible for physical CPUs and security groups on instances
False. AWS is responsible for the hardware in its sites, but not for security groups, which are created and managed by the users.
140. Explain "Shared Controls" in regards to the shared responsibility model
AWS definition: "apply to both the infrastructure layer and customer layers, but in completely separate contexts or perspectives. In a shared control, AWS provides the requirements for the infrastructure and the customer must provide their own control implementation within their use of AWS services"
141.What is the AWS compliance program?
142.What is AWS Artifact?
AWS definition: "AWS Artifact is your go-to, central resource for compliance-related information that matters to you. It provides on-demand access to AWS’ security and compliance reports and select online agreements."
143.What is AWS Inspector?
AWS definition: "Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices.""
144.What is AWS GuardDuty?
145.What is AWS Shield?
AWS definition: "AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS."
146.What is AWS WAF? Give an example of how it can be used and describe what resources or services you can use it with
147.What is AWS VPN used for?
148.What is the difference between Site-to-Site VPN and Client VPN?
149.What is AWS CloudHSM?
Amazon definition: "AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud."
150.True or False? AWS Inspector can perform both network and host assessments
True
151.What is AWS Key Management Service (KMS)?
AWS definition: "KMS makes it easy for you to create and manage cryptographic keys and control their use across a wide range of AWS services and in your applications." More on KMS here
152.What is AWS Acceptable Use Policy?
It describes prohibited uses of the web services offered by AWS. More on AWS Acceptable Use Policy here
153.True or False? A user is not allowed to perform penetration testing on any of the AWS services
False. On some services, like EC2, CloudFront and RDS, penetration testing is allowed.
154.True or False? DDoS attack is an example of allowed penetration testing activity
False.
155.True or False? AWS Access Key is a type of MFA device used for AWS resources protection
False. Security key is an example of an MFA device.
156.What is Amazon Cognito?
Amazon definition: "Amazon Cognito handles user authentication and authorization for your web and mobile apps."
157.What is AWS ACM?
Amazon definition: "AWS Certificate Manager is a service that lets you easily provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services and your internal connected resources."
AWS Databases
158.What is AWS RDS?
159.Explain "Point-in-Time Recovery" feature in DynamoDB
Amazon definition: "You can create on-demand backups of your Amazon DynamoDB tables, or you can enable continuous backups using point-in-time recovery. For more information about on-demand backups, see On-Demand Backup and Restore for DynamoDB."
160.Explain "Global Tables" in DynamoDB
Amazon definition: "A global table is a collection of one or more replica tables, all owned by a single AWS account."
161.What is DynamoDB Accelerator?
Amazon definition: "Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement – from milliseconds to microseconds..."
Learn more here
162.What is AWS Redshift and how is it different than RDS?
A cloud data warehouse, optimized for analytical (OLAP) workloads, whereas RDS is a managed relational database service aimed at transactional (OLTP) workloads.
163.What do you do if you suspect AWS Redshift performs slowly?
You can confirm your suspicion by going to the AWS Redshift console and looking at the running queries graph. This should tell you if there are any long-running queries.
If confirmed, you can query for running queries and cancel the irrelevant queries
Check for connection leaks (query for running connections and include their IP)
Check for table locks and kill irrelevant locking sessions
164.What is AWS ElastiCache? For what cases is it used?
Amazon ElastiCache is a fully managed Redis or Memcached in-memory data store. It's great for use cases like two-tier web applications where the most frequently accessed data is stored in ElastiCache so response time is optimal.
165.What is Amazon Aurora
A MySQL & Postgresql based relational database. Also, the default database proposed for the user when using RDS for creating a database. Great for use cases like two-tier web applications that has a MySQL or Postgresql database layer and you need automated backups for your application.
166.What is Amazon DocumentDB?
Amazon definition: "Amazon DocumentDB (with MongoDB compatibility) is a fast, scalable, highly available, and fully managed document database service that supports MongoDB workloads. As a document database, Amazon DocumentDB makes it easy to store, query, and index JSON data."
167.What "AWS Database Migration Service" is used for?
168.What type of storage is used by Amazon RDS?
EBS
169.Explain Amazon RDS Read Replicas
AWS definition: "Amazon RDS Read Replicas provide enhanced performance and durability for RDS database (DB) instances. They make it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads." Read more about here
AWS Networking:
170.What is VPC?
"A logically isolated section of the AWS cloud where you can launch AWS resources in a virtual network that you define" Read more about it here.
171.True or False? VPC spans multiple regions
False
172.True or False? Subnets belonging to the same VPC can be in different availability zones
True. Just to clarify, a single subnet resides entirely in one AZ.
173.What is an Internet Gateway?
"component that allows communication between instances in your VPC and the internet" (AWS docs). Read more about it here
174.True or False? NACLs allow or deny traffic at the subnet level
True
175.True or False? Multiple Internet Gateways can be attached to one VPC
False. Only one internet gateway can be attached to a single VPC.
176.What is an Elastic IP address?
177.True or False? Route Tables are used to allow or deny traffic from the internet to AWS instances
False.
178.Explain Security Groups and Network ACLs
NACL - security layer on the subnet level.
Security Group - security layer on the instance level.
Read more about it here and here
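As an example of working with security groups programmatically, here is a boto3 sketch that opens HTTPS to the world on an existing security group (the group ID is a made-up placeholder, and valid AWS credentials are assumed):
```python
import boto3

ec2 = boto3.client("ec2")

# Allow inbound HTTPS (TCP 443) from anywhere on an existing security group
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",   # placeholder ID
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "public HTTPS"}],
        }
    ],
)
```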
179.What is AWS Direct Connect?
Allows you to connect your corporate network to AWS network.
AWS - Identify the service or tool
180.What would you use for automating code/software deployments?
AWS CodeDeploy
181.What would you use for easily creating similar AWS environments/resources for different customers?
CloudFormation
182.Using which service, can you add user sign-up, sign-in and access control to mobile and web apps?
Cognito
183.Which service would you use for building a website or web application?
Lightsail
184.Which tool would you use for choosing between Reserved instances or On-Demand instances?
Cost Explorer
185.What would you use to check how many unassociated Elastic IP addresses you have?
Trusted Advisor
186.Which service allows you to transfer large amounts (Petabytes) of data in and out of the AWS cloud?
AWS Snowball
187.Which service provides a virtual network dedicated to your AWS account?
VPC
188.What would you use for having automated backups for an application that has a MySQL database layer?
Amazon Aurora
189.What would you use to migrate on-premise database to AWS?
AWS Database Migration Service (DMS)
190.What would you use to check why certain EC2 instances were terminated?
AWS CloudTrail
191.What would you use for SQL database?
AWS RDS
192.What would you use for NoSQL database?
AWS DynamoDB
193.What would you use for adding image and video analysis to your application?
AWS Rekognition
194.Which service would you use for debugging and improving performance issues with your applications?
AWS X-Ray
195.Which service is used for sending notifications?
SNS
196.What would you use for running SQL queries interactively on S3?
AWS Athena
197.Which service would you use for monitoring malicious activity and unauthorized behavior in regards to AWS accounts and workloads?
Amazon GuardDuty
198.Which service would you use to centrally manage billing, control access, compliance, and security across multiple AWS accounts?
AWS Organizations
199.Which service would you use for web application protection?
AWS WAF
200.You would like to monitor some of your resources in the different services. Which service would you use for that?
CloudWatch
201.Which service would you use for performing security assessment?
AWS Inspector
202.Which service would you use for creating DNS record?
Route 53
203.What would you use if you need a fully managed document database?
Amazon DocumentDB
204.Which service would you use to add access control (or sign-up, sign-in forms) to your web/mobile apps?
AWS Cognito
205.Which service would you use if you need a messaging queue?
Simple Queue Service (SQS)
206.Which service would you use if you need managed DDOS protection?
AWS Shield
207.Which service would you use if you need to store frequently used data for low latency access?
ElastiCache
208.What would you use to transfer files over long distances between a client and an S3 bucket?
Amazon S3 Transfer Acceleration
209.Which service would you use for distributing incoming requests across multiple endpoints?
Route 53
AWS DNS
210.What is Route 53?
"Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service..." Some of Route 53 features:
Register domain
DNS service - domain name translations
Health checks - verify your app is available
AWS Monitoring and Logging:
211.What is AWS CloudWatch?
AWS definition: "Amazon CloudWatch is a monitoring and observability service..."
212.What is AWS CloudTrail?
AWS definition: "AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account."
Read more on CloudTrail here
213.What is Simple Notification Service?
AWS definition: "a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications."
Read more about it here
214.Explain the following in regards to SNS:
Topics
Subscribers
Publishers
Topics - used for grouping multiple endpoints
Subscribers - the endpoints where topics send messages to
Publishers - the provider of the message (event, person, ...)
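A minimal boto3 sketch tying the three together (assuming valid AWS credentials; the topic name and email address are examples):
```python
import boto3

sns = boto3.client("sns")

# Topic: the named channel that groups subscribers
topic = sns.create_topic(Name="deploy-notifications")
topic_arn = topic["TopicArn"]

# Subscriber: an endpoint (here an email address) that receives messages sent to the topic
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="email",
    Endpoint="oncall@example.com",   # the recipient must confirm the subscription
)

# Publisher: whoever calls publish() on the topic
sns.publish(
    TopicArn=topic_arn,
    Subject="Deployment finished",
    Message="Version 1.2.3 was deployed to production.",
)
```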
AWS Billing and Support:
215.What is AWS Organizations?
AWS definition: "AWS Organizations helps you centrally govern your environment as you grow and scale your workloads on AWS." More on Organizations here
216.What are Service Control Policies and to what service do they belong?
They belong to the AWS Organizations service. The definition by Amazon: "SCPs offer central control over the maximum available permissions for all accounts in your organization, allowing you to ensure your accounts stay within your organization’s access control guidelines."
217.Explain AWS pricing model
It mainly works on a "pay-as-you-go" basis, meaning you pay only for what you are using and when you are using it.
In S3 you pay for: 1. How much data you are storing 2. Making requests (PUT, POST, ...)
In EC2 it's based on the purchasing option (on-demand, spot, ...), instance type, AMI type and the region used.
More on AWS pricing model here
218.How should one estimate AWS costs when, for example, comparing to on-premise solutions?
TCO calculator
AWS simple calculator
Cost Explorer
219.What does basic support in AWS include?
24x7 customer service
Trusted Advisor
AWS Personal Health Dashboard
220.How are EC2 instances billed?
221.What is the AWS Pricing Calculator used for?
222.What is Amazon Connect?
Amazon definition: "Amazon Connect is an easy to use omnichannel cloud contact center that helps companies provide superior customer service at a lower cost."
223.What are "APN Consulting Partners"?
Amazon definition: "APN Consulting Partners are professional services firms that help customers of all types and sizes design, architect, build, migrate, and manage their workloads and applications on AWS, accelerating their journey to the cloud."
Learn more here
224.Which of the following are AWS account types (sorted in order)?
Basic, Developer, Business, Enterprise
Newbie, Intermediate, Pro, Enterprise
Developer, Basic, Business, Enterprise
Beginner, Pro, Intermediate Enterprise
Basic, Developer, Business, Enterprise
225.True or False? Region is a factor when it comes to EC2 costs/pricing
True. You pay differently based on the chosen region.
226.What is "AWS Infrastructure Event Management"?
AWS Definition: "AWS Infrastructure Event Management is a structured program available to Enterprise Support customers (and Business Support customers for an additional fee) that helps you plan for large-scale events such as product or application launches, infrastructure migrations, and marketing events."
AWS Automation:
227.What is AWS CodeDeploy?
Amazon definition: "AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers."
Learn more here
228.Explain what is CloudFormation
AWS Misc:
229.Which AWS service do you have experience with that you think is not very common?
230.What is AWS CloudSearch?
231.What is AWS Lightsail?
AWS definition: "Lightsail is an easy-to-use cloud platform that offers you everything needed to build an application or website, plus a cost-effective, monthly plan."
232.What is AWS Rekognition?
AWS definition: "Amazon Rekognition makes it easy to add image and video analysis to your applications using proven, highly scalable, deep learning technology that requires no machine learning expertise to use."
Learn more here
233.What are AWS Resource Groups used for?
Amazon definition: "You can use resource groups to organize your AWS resources. Resource groups make it easier to manage and automate tasks on large numbers of resources at one time. "
Learn more here
234.What is AWS Global Accelerator?
Amazon definition: "AWS Global Accelerator is a service that improves the availability and performance of your applications with local or global users..."
Learn more here
235.What is AWS Config?
Amazon definition: "AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources."
236.What is AWS X-Ray?
AWS definition: "AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture." Learn more here
237.What is AWS OpsWorks?
Amazon definition: "AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet."
238.What is AWS Athena?
"Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL."
239.What is Amazon Cloud Directory?
Amazon definition: "Amazon Cloud Directory is a highly available multi-tenant directory-based store in AWS. These directories scale automatically to hundreds of millions of objects as needed for applications."
240.What is AWS Elastic Beanstalk?
AWS definition: "AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services...You can simply upload your code and Elastic Beanstalk automatically handles the deployment"
241.What is Amazon SWF?
Amazon definition: "Amazon SWF helps developers build, run, and scale background jobs that have parallel or sequential steps. You can think of Amazon SWF as a fully-managed state tracker and task coordinator in the Cloud."
242.What is AWS EMR?
AWS definition: "big data platform for processing vast amounts of data using open source tools such as Apache Spark, Apache Hive, Apache HBase, Apache Flink, Apache Hudi, and Presto."
243.What is AWS Quick Starts?
AWS definition: "Quick Starts are built by AWS solutions architects and partners to help you deploy popular technologies on AWS, based on AWS best practices for security and high availability."
244.What is the Trusted Advisor?
245.What is AWS Service Catalog?
Amazon definition: "AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS."
246.What is AWS CAF?
Amazon definition: "AWS Professional Services created the AWS Cloud Adoption Framework (AWS CAF) to help organizations design and travel an accelerated path to successful cloud adoption. "
247.What is AWS Cloud9?
AWS definition: "AWS Cloud9 is a cloud-based integrated development environment (IDE) that lets you write, run, and debug your code with just a browser"
248.What is AWS Application Discovery Service?
Amazon definition: "AWS Application Discovery Service helps enterprise customers plan migration projects by gathering information about their on-premises data centers."
249.What is the AWS well-architected framework and what pillars it's based on?
AWS definition: "The Well-Architected Framework has been developed to help cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications. Based on five pillars — operational excellence, security, reliability, performance efficiency, and cost optimization"
250.What AWS services are serverless (or have the option to be serverless)?
AWS Lambda
AWS Athena
251.What is Simple Queue Service (SQS)?
AWS definition: "Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications".
Network:
252.What is Ethernet?
Ethernet simply refers to the most common type of Local Area Network (LAN) used today. A LAN—in contrast to a WAN (Wide Area Network), which spans a larger geographical area—is a connected network of computers in a small area, like your office, college campus, or even home.
253.What is TCP/IP?
A set of protocols that define how two or more devices can communicate with each other. To learn more about TCP/IP, read here
254.What is a MAC address? What is it used for?
A MAC address is a unique identification number or code used to identify individual devices on the network.
Packets that are sent on the ethernet are always coming from a MAC address and sent to a MAC address. If a network adapter is receiving a packet, it is comparing the packet’s destination MAC address to the adapter’s own MAC address.
255.When is this MAC address used?: ff:ff:ff:ff:ff:ff
256.What is an IP address?
An Internet Protocol address (IP address) is a numerical label assigned to each device connected to a computer network that uses the Internet Protocol for communication. An IP address serves two main functions: host or network interface identification and location addressing.
257.Explain subnet mask and give an example
A subnet mask is a 32-bit number that masks an IP address and divides the IP address into a network address and a host address. A subnet mask is made by setting the network bits to all "1"s and the host bits to all "0"s. Within a given network, two host addresses are reserved for special purposes and cannot be assigned to hosts: the "0" address is assigned to the network address and "255" is assigned to the broadcast address.
For Example
| Address Class | No of Network Bits | No of Host Bits | Subnet mask | CIDR notation |
| ------------- | ------------------ | --------------- | --------------- | ------------- |
| A | 8 | 24 | 255.0.0.0 | /8 |
| A | 9 | 23 | 255.128.0.0 | /9 |
| A | 12 | 20 | 255.240.0.0 | /12 |
| A | 14 | 18 | 255.252.0.0 | /14 |
| B | 16 | 16 | 255.255.0.0 | /16 |
| B | 17 | 15 | 255.255.128.0 | /17 |
| B | 20 | 12 | 255.255.240.0 | /20 |
| B | 22 | 10 | 255.255.252.0 | /22 |
| C | 24 | 8 | 255.255.255.0 | /24 |
| C | 25 | 7 | 255.255.255.128 | /25 |
| C | 28 | 4 | 255.255.255.240 | /28 |
| C | 30 | 2 | 255.255.255.252 | /30 |
258.What is a private IP address? In which scenarios/system designs, one should use it?
259.What is a public IP address? In which scenarios/system designs, one should use it?
260.Explain the OSI model. What layers are there? What is each layer responsible for?
Application: user end (HTTP is here)
Presentation: establishes context between application-layer entities (Encryption is here)
Session: establishes, manages and terminates the connections
Transport: transfers variable-length data sequences from a source to a destination host (TCP & UDP are here)
Network: transfers datagrams from one network to another (IP is here)
Data link: provides a link between two directly connected nodes (MAC is here)
Physical: the electrical and physical specification of the data connection (Bits are here)
261.For each of the following determine to which OSI layer it belongs:
Error correction
Packets routing
Cables and electrical signals
MAC address
IP address
Terminate connections
3 way handshake
Error correction - Data link
Packets routing - Network
Cables and electrical signals - Physical
MAC address - Data link
IP address - Network
Terminate connections - Session
3 way handshake - Transport
262.What delivery schemes are you familiar with?
Unicast: One to one communication where there is one sender and one receiver.
Broadcast: Sending a message to everyone in the network. The address ff:ff:ff:ff:ff:ff is used for broadcasting. Two common protocols which use broadcast are ARP and DHCP.
Multicast: Sending a message to a group of subscribers. It can be one-to-many or many-to-many.
263.What is CSMA/CD? Is it used in modern ethernet networks?
CSMA/CD stands for Carrier Sense Multiple Access / Collision Detection. Its primary focus is to manage access to a shared medium/bus where only one host can transmit at a given point of time.
CSMA/CD algorithm:
Before sending a frame, a host checks whether another host is already transmitting a frame.
If no one is transmitting, it starts transmitting the frame.
If two hosts transmitted at the same time, we have a collision.
Both hosts stop sending the frame and send everyone a 'jam signal', notifying everyone that a collision occurred.
They wait a random amount of time before sending again.
Once each host has waited a random time, they try to send the frame again, and so the cycle repeats.
Note that modern Ethernet networks are switched and full-duplex, so collisions cannot occur and CSMA/CD is no longer needed in practice.
263. Describe the following network devices and the difference between them:
router
switch
hub
264.How does a router work?
A router is a physical or virtual appliance that passes information between two or more packet-switched computer networks. A router inspects a given data packet's destination Internet Protocol address (IP address), calculates the best way for it to reach its destination and then forwards it accordingly.
265.What is NAT?
Network Address Translation (NAT) is a process in which one or more local IP addresses are translated into one or more global IP addresses and vice versa, in order to provide Internet access to the local hosts.
266.What is a proxy? How does it work? What do we need it for?
A proxy server acts as a gateway between you and the internet. It’s an intermediary server separating end users from the websites they browse.
If you’re using a proxy server, internet traffic flows through the proxy server on its way to the address you requested. The request then comes back through that same proxy server (there are exceptions to this rule), and then the proxy server forwards the data received from the website to you.
Proxy servers provide varying levels of functionality, security, and privacy depending on your use case, needs, or company policy.
267.What is TCP? How does it work? What is the 3 way handshake?
TCP 3-way handshake or three-way handshake is a process which is used in a TCP/IP network to make a connection between server and client.
A three-way handshake is primarily used to create a TCP socket connection. It works when:
A client node sends a SYN data packet over an IP network to a server on the same or an external network. The objective of this packet is to ask/infer if the server is open for new connections.
The target server must have open ports that can accept and initiate new connections. When the server receives the SYN packet from the client node, it responds and returns a confirmation receipt – the ACK packet or SYN/ACK packet.
The client node receives the SYN/ACK from the server and responds with an ACK packet.
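If you want to see the handshake on the wire, a quick way (assuming tcpdump is installed and something is listening on port 80) is:
sudo tcpdump -i any -nn 'tcp port 80 and (tcp[tcpflags] & (tcp-syn|tcp-ack) != 0)'
# in another terminal, trigger a connection and watch the SYN, SYN/ACK and ACK packets:
curl -s http://localhost:80/ > /dev/null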
268.What is round-trip delay or round-trip time?
From wikipedia: "the length of time it takes for a signal to be sent plus the length of time it takes for an acknowledgement of that signal to be received"
Bonus question: what is the RTT of LAN?
269.How does SSL handshake work?
270.What is the difference between TCP and UDP?
TCP establishes a connection between the client and the server to guarantee the order of the packets. UDP, on the other hand, does not establish a connection between client and server and doesn't handle packet order. This makes UDP more lightweight than TCP and a perfect candidate for services like streaming.
271.What TCP/IP protocols are you familiar with?
272.Explain "default gateway"
A default gateway serves as an access point or IP router that a networked computer uses to send information to a computer in another network or the internet.
273.What is ARP? How does it work?
ARP stands for Address Resolution Protocol. When you try to ping an IP address on your local network, say 192.168.1.1, your system has to turn the IP address 192.168.1.1 into a MAC address. This involves using ARP to resolve the address, hence its name.
Systems keep an ARP look-up table where they store information about what IP addresses are associated with what MAC addresses. When trying to send a packet to an IP address, the system will first consult this table to see if it already knows the MAC address. If there is a value cached, ARP is not used.
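To inspect the ARP table yourself, for example:
ip neigh show        # modern way to display the ARP (neighbour) table
arp -n               # older net-tools equivalent, if installed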
274.What is TTL? What does it help to prevent?
275.What is DHCP? How does it work?
It stands for Dynamic Host Configuration Protocol, and allocates IP addresses, subnet masks and gateways to hosts. This is how it works:
A host upon entering a network, broadcasts a message in search of a DHCP server (DHCP DISCOVER)
An offer message is sent back by the DHCP server as a packet containing lease time, subnet mask, IP addresses, etc (DHCP OFFER)
Depending on which offer is accepted, the client sends back a reply broadcast letting all DHCP servers know (DHCP REQUEST)
Server sends an acknowledgment (DHCP ACK)
276.What is SSL tunneling? How does it work?
277.What is a socket? Where can you see the list of sockets in your system?
278.What is IPv6? Why should we consider using it if we have IPv4?
279.What is VLAN?
280.What is MTU?
281.What happens if you send a packet that is bigger than the MTU?
282.True or False? Ping uses UDP because it doesn't care about reliable connections
283.What is SDN?
284.What is ICMP? What is it used for?
285.What is NAT? How does it works?
286.Which factors affect network performance?
287.What do the terms "Data Plane" and "Control Plane" refer to?
The exact meaning usually depends on the context, but overall the data plane refers to all the functions that forward packets and/or frames from one interface to another, while the control plane refers to all the functions that make use of routing protocols.
There is also "Management Plane" which refers to monitoring and management functions.
288.Explain Spanning Tree Protocol (STP)
289.What is link aggregation? Why is it used?
291.What is Asymmetric Routing? How do you deal with it?
292.What overlay (tunnel) protocols are you familiar with?
293.What is GRE? How does it work?
294.What is VXLAN? How does it work?
295.What is SNAT?
296.Explain OSPF
297. What is latency?
298. What is bandwidth?
299.What is throughput?
300.When performing a search query, what is more important, latency or throughput? And how do you assure that when managing global infrastructure?
Latency. To have a good latency, a search query should be forwarded to the closest datacenter.
301.When uploading a video, what is more important, latency or throughput? And how to assure that?
Throughput. To have a good throughput, the upload stream should be routed to an underutilized link.
302.What other considerations (except latency and throughput) are there when forwarding requests?
Keep caches updated (which means the request could be forwarded not to the closest datacenter)
303.Explain Spine & Leaf
304.What is Network Congestion? What can cause it?
305.What can you tell me about UDP packet format? What about TCP packet format? How is it different?
306.What is the exponential backoff algorithm? Where is it used?
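In short, exponential backoff retries a failed operation with a wait time that doubles after each failure; it's widely used by network clients and cloud SDKs when talking to throttled or temporarily failing APIs. A minimal shell sketch of the idea (the URL is just a placeholder):
delay=1
for attempt in 1 2 3 4 5; do
  if curl -fsS https://example.com/health > /dev/null; then
    echo "succeeded on attempt $attempt" && break
  fi
  echo "attempt $attempt failed, retrying in ${delay}s"
  sleep "$delay"
  delay=$((delay * 2))   # the wait time grows exponentially
done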
307.Using Hamming code, what would be the code word for the following data word 100111010001101?
00110011110100011101
308.Give examples of protocols found in the application layer
Hypertext Transfer Protocol (HTTP) - used for the webpages on the internet
Simple Mail Transfer Protocol (SMTP) - email transmission
Telnet (Telecommunications Network) - terminal emulation that allows a client to access a telnet server
File Transfer Protocol (FTP) - facilitates transfer of files between any two machines
Domain Name System (DNS) - domain name translation
Dynamic Host Configuration Protocol (DHCP) - allocates IP addresses, subnet masks and gateways to hosts
Simple Network Management Protocol (SNMP) - gathers data of devices on the network
309.Give examples of protocols found in the network Layer
Internet Protocol (IP) - assists in routing packets from one machine to another
Internet Control Message Protocol (ICMP) - lets one know what is going on, such as error messages and debugging information
310.What is HSTS?
HTTP Strict Transport Security is a web server directive that informs user agents and web browsers how to handle its connection through a response header sent at the very beginning and back to the browser. This forces connections over HTTPS encryption, disregarding any script's call to load any resource in that domain over HTTP.
Read more [here](https://www.globalsign.com/en/blog/what-is-hsts-and-how-do-i-use-it)
311.What is the difference if any between SSL and TLS?
LINUX:
312.What is your experience with Linux?
Only you know :)
For example:
Administration
Troubleshooting & Debugging
Storage
Networking
Development
Deployments
313.Explain what each of the following commands does and give an example on how to use it:
touch
ls
rm
cat
cp
mkdir
314.Some of the commands in the previous question can be run with the -r/--recursive flag. What does it do?
315.Explain each field in the output of `ls -l` command
It shows a detailed list of files in a long format. From the left:
file permissions, number of links, owner name, owner group, file size, timestamp of last modification and directory/file name
316.What are hidden files/directories? How to list them?
These are files that are not displayed in a standard ls listing; their names start with a dot. An example is .bashrc, which is used to execute scripts/settings on shell startup. Some also store configuration for tools on your host, like ~/.kube/config for kubectl. The command used to list them is:
ls -a
317.Explain what each of the following commands does and give an example on how to use it:
sed
grep
cut
awk
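A few examples (file names are placeholders):
sed 's/foo/bar/g' file.txt              # replace every occurrence of foo with bar
grep -i 'error' /var/log/syslog         # print lines containing "error", case-insensitive
cut -d: -f1 /etc/passwd                 # print the first colon-separated field (the user names)
awk -F: '{print $1, $7}' /etc/passwd    # print the user name and login shell columns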
318.What each of the following commands does?
pwd
cd
find
ls
319.What each of the following commands does?
cd /
cd ~
cd
cd ..
cd .
cd -
cd / -> change to the root directory
cd ~ -> change to your home directory
cd -> change to your home directory
cd .. -> change to the directory above your current i.e parent directory
cd . -> change to the directory you currently in
cd - -> change to the last visited path
320.How to rename the name of a file or a directory?
Using the mv command.
321.Specify which command would you use (and how) for each of the following scenarios
Remove a directory with files
Display the content of a file
Provides access to the file /tmp/x for everyone
Change working directory to user home directory
Replace every occurrence of the word "good" with "great" in the file /tmp/y
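One possible set of answers for the scenarios above (some_directory and /tmp/some_file are placeholders, the other paths come from the question):
rm -r some_directory                 # remove a directory with files
cat /tmp/some_file                   # display the content of a file
chmod 777 /tmp/x                     # give everyone read, write and execute on /tmp/x
cd ~                                 # change working directory to the user's home directory
sed -i 's/good/great/g' /tmp/y       # replace every occurrence of "good" with "great"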
322.How can you check what is the path of a certain command?
whereis
which
323.Explain redirection
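In short, redirection connects a command's input or output to a file instead of the terminal. For example (some_command is a placeholder):
some_command > out.txt      # redirect stdout to a file (overwrite)
some_command >> out.txt     # append stdout to a file
some_command 2> err.txt     # redirect stderr to a file
some_command &> all.txt     # redirect both stdout and stderr (bash)
some_command < in.txt       # read stdin from a file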
324.Explain piping. How do you perform piping?
Using a pipe in Linux allows you to send the output of one command to serve as the input of another command. For example: cat /etc/services | wc -l
325.Fix the following commands:
sed "s/1/2/g' /tmp/myFile
find . -iname *.yaml -exec sed -i "s/1/2/g" {} ;
sed 's/1/2/g' /tmp/myFile
find . -iname "*.yaml" -exec sed -i "s/1/2/g" {} \;
LINUX FHS
326.In Linux FHS (Filesystem Hierarchy Standard) what is the /?
327.What is stored in each of the following paths?
/bin, /sbin, /usr/bin and /usr/sbin
/etc
/home
/var
/tmp
328.What is special about the /tmp directory when compared to other directories?
329.What kind of information one can find in /proc?
330.Can you create files in /proc?
331.In which path can you find the system devices (e.g. block storage)?
332.Running the command df you get "command not found". What could be wrong and how to fix it?
Most likely the default/generated $PATH was somehow modified or overridden and no longer contains /usr/bin (or /bin), where df normally resides. This could also happen if .bash_profile or another configuration file of your interpreter was wrongly modified, causing erratic behaviour. There are several ways to fix it:
Manually add the missing directories to your $PATH, e.g. PATH="$PATH":/usr/bin:/usr/sbin
Restore the variable from a backup of your environment configuration, if you have one.
Look up your distribution's default $PATH value and set it using the first method.
333.How do you schedule tasks periodically?
You can use the commands cron and at. With cron, tasks are scheduled using the following format:
*/30 * * * * bash myscript.sh Executes the script every 30 minutes.
The tasks are stored in a cron file, you can write in it using crontab -e
Alternatively if you are using a distro with systemd it's recommended to use systemd timers.
334.How to check which commands you executed in the past?
history command or .bash_history file
LINUX Permissions:
335.How to change the permissions of a file?
Using the chmod command.
336.What does the following permissions mean?:
777
644
750
777 - You give the owner, group and other: Execute (1), Write (2) and Read (4); 4+2+1 = 7.
644 - Owner has Read (4), Write (2), 4+2 = 6; Group and Other have Read (4).
750 - Owner has Read, Write and Execute (4+2+1 = 7), Group has Read (4) and Execute (1); 4+1 = 5. Others have no permissions.
337.What this command does? chmod +x some_file
It adds execute permissions to all sets i.e user, group and others
338.Explain what is setgid and setuid
setuid is a Linux file permission that permits a user to run a file or program with the permissions of the owner of that file, effectively elevating the current user's privileges for that execution.
setgid is the equivalent for groups: a file with setgid set, when executed, runs with the permissions of the group that owns the file.
339.What is the purpose of sticky bit?
It's a bit that, when set on a directory, allows only the file's owner, the directory's owner or the root user to delete or rename files inside that directory (a common example is /tmp).
340.What the following commands do?
chmod
chown
chgrp
chmod - changes access permissions to files system objects
chown - changes the owner of file system files and directories
chgrp - changes the group associated with a file system object
341.What is sudo? How do you set it up?
342.True or False? In order to install packages on the system one must be the root user or use the sudo command
True
343.Explain what are ACLs. For what use cases would you recommend to use them?
344.You try to create a file but it fails. Name at least three different reason as to why it could happen
No more disk space
No more inodes
No permissions
LINUX Shell Scripting:
345.What this line in scripts mean?: #!/bin/bash
#!/bin/bash is the shebang line. It tells the system which interpreter should run the script.
/bin/bash is the most common shell and the default login shell on most Linux systems. The shell's name is an acronym for Bourne-Again SHell. Bash can execute the vast majority of scripts and is widely used because it is well developed and has a rich feature set and syntax.
346.True or False?: when a certain command/line fails, the script, by default, will exit and will not keep running
Depends on the language and settings used. When a script written in Bash fails to run a certain command, it will keep running and will execute all other commands mentioned after the command which failed. Most of the time we would actually want the opposite to happen. In order to make Bash exit when a specific command fails, use 'set -e' in your script.
347.Explain what would be the result of each command:
echo $0
echo $?
echo $$
echo $@
echo $#
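A small script to see them in action (run it with a couple of arguments, e.g. ./vars.sh a b):
#!/bin/bash
echo "$0"    # the name/path used to invoke the script
echo "$?"    # exit code of the last executed command
echo "$$"    # PID of the current shell/script
echo "$@"    # all the arguments passed to the script
echo "$#"    # the number of arguments passed to the script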
348.How do you debug shell scripts?
Answer depends on the language you are using for writing your scripts. If Bash is used for example then:
Adding -x to the script I'm running in Bash
Old good way of adding echo statements
If Python, then using pdb is very useful.
349.How do you get input from the user in shell scripts?
Using the keyword read so for example read x will wait for user input and will store it in the variable x.
350.Explain continue and break. When do you use them if at all?
351.Running the following bash script, we don't get 2 as a result, why?
x = 2
echo $x
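Because of the spaces, the shell parses x as a command name with the arguments = and 2, so no variable is ever assigned (and you will likely see "x: command not found"). Shell assignments must have no spaces around the equals sign:
x=2
echo $x    # now prints 2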
352.How to store the output of a command in a variable?
353.How do you check variable length?
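A short example covering the last two questions:
output=$(ls -l /tmp)    # command substitution stores the command's output in a variable
echo "${#output}"       # ${#var} expands to the length (number of characters) of the variable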
354.Explain the following code:
:(){ :|:& };:
355.Can you give an example to some Bash best practices?
356.What is the ternary operator? How do you use it in bash?
A short way of using if/else. An example:
[[ $a = 1 ]] && b="yes, equal" || b="nope"
357.What does the following code do and when would you use it?
diff <(ls /tmp) <(ls /var/tmp)
It is called 'process substitution'. It provides a way to pass the output of a command to another command when using a pipe | is not possible. It can be used when a command does not support STDIN or you need the output of multiple commands. https://superuser.com/a/1060002/167769
LINUX SYSTEMD
358.What is systemd?
Systemd is a system and service manager that runs as a daemon (the 'd' in the name stands for daemon).
A daemon is a program that runs in the background without direct control of the user, although the user can at any time talk to the daemon.
systemd has many features such as user processes control/tracking, snapshot support, inhibitor locks..
If we visualize the unix/linux system in layers, systemd would fall directly after the linux kernel.
Hardware -> Kernel -> Daemons, System Libraries, Display Server.
359.On a system which uses systemd, how would you display the logs?
journalctl
360.Describe how to make a certain process/app a service
LINUX Debugging:
361.Where system logs are located?
/var/log
362.How to follow file's content as it being appended without opening the file every time?
tail -f <file_name>
363.What are you using for troubleshooting and debugging network issues?
dstat -t is great for identifying network and disk issues.
netstat -tnlaup can be used to see which processes are running on which ports.
lsof -i -P can be used for the same purpose as netstat.
ngrep -d any <pattern> is used for matching a regex against the payloads of packets.
tcpdump is used for capturing packets.
wireshark has the same concept as tcpdump but comes with a GUI (optional).
364.What are you using for troubleshooting and debugging disk & file system issues?
dstat -t is great for identifying network and disk issues.
opensnoop can be used to see which files are being opened on the system (in real time).
365.What are you using for troubleshooting and debugging process issues?
strace is great for understanding what your program does. It prints every system call your program executed.
366.What are you using for debugging CPU related issues?
top will show you how much CPU each process consumes.
perf is a great choice for a sampling profiler and, in general, for figuring out what your CPU cycles are "wasted" on.
flamegraphs are great for visualizing CPU consumption (http://www.brendangregg.com/flamegraphs.html)
367.You get a call from someone claiming "my system is SLOW". What do you do?
Check with top for anything unusual
Run dstat -t to check if it's related to disk or network.
Check if it's network related with sar
Check I/O stats with iostat
368.Explain iostat output
369.How to debug binaries?
370.What is the difference between CPU load and utilization?
371.How you measure time execution of a program?
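One simple option is the shell's time keyword (./my_script.sh is a placeholder name):
time ./my_script.sh    # prints real (wall clock), user and sys CPU time when it finishes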
LINUX KERNEL
372.What is a kernel, and what does it do?
The kernel is part of the operating system and is responsible for tasks like:
Allocating memory
Schedule processes
Control CPU
373.How do you find out which Kernel version your system is using?
uname -a command
374.What is a Linux kernel module and how do you load a new module?
375.Explain user space vs. kernel space
The operating system executes the kernel in protected memory to prevent anyone from changing (and risking it crashing). This is what is known as "Kernel space". "User space" is where users executes their commands or applications. It's important to create this separation since we can't rely on user applications to not tamper with the kernel, causing it to crash.
Applications can access system resources and indirectly the kernel space by making what is called "system calls".
376.What are system calls? What system calls are you familiar with?
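To see system calls in practice, strace (assuming it is installed) traces the calls a program makes:
strace -c ls /tmp                    # run ls and print a summary count of its system calls
strace -e openat cat /etc/hostname   # trace only the openat() calls made by cat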
Linux Virtualization:
377.What virtualization solutions are available for Linux?
378.What is KVM?
LINUX SSH
379.What is SSH? How to check if a Linux server is running SSH?
Wikipedia Definition: "SSH or Secure Shell is a cryptographic network protocol for operating network services securely over an unsecured network."
Hostinger.com Definition: "SSH, or Secure Shell, is a remote administration protocol that allows users to control and modify their remote servers over the Internet."
An SSH server will have SSH daemon running. Depends on the distribution, you should be able to check whether the service is running (e.g. systemctl status sshd).
380.Why SSH is considered better than telnet?
Telnet also allows you to connect to a remote host but, as opposed to SSH where the communication is encrypted, in telnet the data is sent in clear text, so it isn't considered secure because anyone on the network can see exactly what is sent, including passwords.
381.What is stored in ~/.ssh/known_hosts?
382.You try to ssh to a server and you get "Host key verification failed". What does it mean?
It means that the key of the remote host was changed and doesn't match the one that is stored on the machine (in ~/.ssh/known_hosts).
383.What is the difference between SSH and SSL?
384.What ssh-keygen is used for?
385.What is SSH port forwarding?
LINUX Globbing, Wildcards
386.What is Globbing?
387.What are wildcards? Can you give an example of how to use them?
388.Explain what will ls [XYZ] match
389.Explain what will ls [^XYZ] match
390.Explain what will ls [0-5] match
391.What each of the following matches
?
*
The ? matches any single character
The * matches zero or more characters
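For example:
ls file?.txt    # matches file1.txt or fileA.txt, but not file10.txt
ls *.log        # matches any (non-hidden) file name ending with .log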
392.What do we grep for in each of the following commands?:
grep '[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}' some_file
grep -E "error|failure" some_file
grep '[0-9]$' some_file
An IP address
The word "error" or "failure"
Lines which end with a number
393.Which line numbers will be printed when running `grep '\baaa\b'` on the following content:
aaa bbb ccc.aaa aaaaaa
lines 1 and 3.
394.What is the difference single and double quotes?
395.What is escaping? What escape character is used for escaping?
396.What is an exit code? What exit codes are you familiar with?
An exit code (or return code) represents the code returned by a child process to its parent process.
0 is an exit code which represents success while anything higher than 1 represents error. Each number has different meaning, based on how the application was developed.
I consider this as a good blog post to read more about it: https://shapeshed.com/unix-exit-codes
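You can check the exit code of the last command with $?:
ls /nonexistent_dir
echo $?    # non-zero, because ls failed
ls /tmp > /dev/null
echo $?    # 0, because the command succeeded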
Linux Boot Process
397.Tell me everything you know about the Linux boot process
Another way to ask this: what happens from the moment you turned on the server until you get a prompt
398.What is GRUB2?
399.What is Secure Boot?
400.What can you find in /boot?
LINUX Disk and Filesystem
401:What's an inode?
For each file (and directory) in Linux there is an inode, a data structure which stores meta data related to the file like its size, owner, permissions, etc.
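To see it in practice (my_file is a placeholder):
ls -i my_file    # print the inode number of the file
stat my_file     # show the metadata stored in the inode: size, owner, permissions, timestamps, link count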
402.Which of the following is not included in inode:
Link count
File size
File name
File timestamp
403.How to check which disks are currently mounted?
Run mount
404.You run the mount command but you get no output. How would you check what mounts you have on your system?
cat /proc/mounts
405.What is the difference between a soft link and hard link?
Hard link is the same file, using the same inode. Soft link is a shortcut to another file, using a different inode.
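For example:
touch original.txt
ln original.txt hard_link.txt      # hard link: shares the same inode as original.txt
ln -s original.txt soft_link.txt   # soft (symbolic) link: a separate inode that points at the path
ls -li original.txt hard_link.txt soft_link.txt   # compare the inode numbers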
406.True or False? You can create an hard link for a directory
False
407.True or False? You can create a soft link between different filesystems
True
408.What happens when you delete the original file in case of soft link and hard link?
409.Can you check what type of filesystem is used in /home?
There are many answers for this question. One way is running df -T
410.What is a swap partition? What is it used for?
411.How to create a
new empty file
a file with text (without using text editor)
a file with given size
412.You are trying to create a new file but you get "File system is full". You check with df for free space and you see you used only 20% of the space. What could be the problem?
413.How would you check what is the size of a certain directory?
du -sh
414.What is LVM?
415.Explain the following in regards to LVM:
PV
VG
LV
416.What is NFS? What is it used for?
417.What RAID is used for? Can you explain the differences between RAID 0, 1, 5 and 10?
418.Describe the process of extending a filesystem disk space
419.What is lazy umount?
420.What is tmpfs?
421.What is stored in each of the following logs?
/var/log/messages
/var/log/boot.log
422.True or False? Both /tmp and /var/tmp are cleared upon system boot
False. /tmp is cleared upon system boot while /var/tmp is cleared every couple of days or not cleared at all (depends on the distro).
Linux Performance Analysis
423.How to check what is the current load average?
One can use uptime or top
424.You know how to see the load average, great. But what does each part of it mean? For example: 1.43, 2.34, 2.78
The three numbers are the system load average over the last 1, 5 and 15 minutes respectively (roughly, the average number of processes running or waiting to run during those windows).
425.How to check process usage?
pidstat
426.How to check disk I/O?
iostat -xz 1
427.How to check how much free memory a system has? How to check memory consumption by each process?
You can use the commands top and free
428.How to check TCP stats?
sar -n TCP,ETCP 1
LINUX Processes
429.How to list all the processes running in your system?
ps -ef
430.How to run a process in the background and why to do that in the first place?
You can achieve that by specifying & at the end of the command. As to why, since some commands/processes can take a lot of time to finish execution or run forever, you may want to run them in the background instead of waiting for them to finish before gaining control again in current session.
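For example:
sleep 300 &              # run in the background; the shell prints the job number and PID
jobs                     # list the background jobs of the current shell
fg %1                    # bring job number 1 back to the foreground
nohup ./long_task.sh &   # keep it running even after you log out (long_task.sh is a placeholder)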
431.How can you find how much memory a specific process consumes?
432.What signal is used by default when you run 'kill *process id*'?
The default signal is SIGTERM (15). It terminates the process gracefully, which means the process is given a chance to clean up and save its current state before exiting.
433.What signals are you familiar with?
SIGTERM - the default signal for terminating a process
SIGHUP - commonly used for reloading configuration
SIGKILL - a signal which cannot be caught or ignored
To view all available signals run kill -l
434.What does kill 0 do?
435.What does kill -0 do?
436.What is a trap?
437.Every couple of days, a certain process stops running. How can you look into why it's happening?
438.What happens when you press ctrl + c?
439.What is a Daemon in Linux?
A background process. Most of these processes are waiting for requests or set of conditions to be met before actually running anything. Some examples: sshd, crond, rpcbind.
440.What are the possible states of a process in Linux?
Running (R)
Uninterruptible Sleep (D) - The process is waiting for I/O
Interruptible Sleep (S)
Stopped (T)
Dead (x)
Zombie (z)
441.How do you kill a process in D state?
442.What is a zombie process?
A process which has finished running but still has an entry in the process table because its exit status was never read.
One reason it happens is when a parent process is programmed incorrectly. Every parent process should call wait() to get the exit code of a child process which finished running. When the parent doesn't collect the child's exit code, the child's entry remains in the process table as a zombie even though it has finished running.
443.How to get rid of zombie processes?
You can't kill a zombie process the regular way with kill -9 for example as it's already dead.
One way to kill zombie process is by sending SIGCHLD to the parent process telling it to terminate its child processes. This might not work if the parent process wasn't programmed properly. The invocation is kill -s SIGCHLD [parent_pid]
You can also try closing/terminating the parent process. This will make the zombie process a child of init (1) which does periodic cleanups and will at some point clean up the zombie process.
444.How to find all the
Processes executed/owned by a certain user
Process which are Java processes
Zombie Processes
If you mention the ps command with arguments at any point, be familiar with what these arguments do exactly.
445.What is the init process?
It is the first process executed by the kernel during the booting of a system. It is a daemon process which runs until the system is shut down. That is why it is the parent (ancestor) of all other processes.
446.Can you describe how processes are being created?
447.How to change the priority of a process? Why would you want to do that?
448.Can you explain how a network process/connection is established and how it's terminated?
449.What does strace do? What about ltrace?
450.Find all the files which end with '.yml' and replace the number 1 with 2 in each file
find /some_dir -iname "*.yml" -print0 | xargs -0 -r sed -i "s/1/2/g"
451.You run ls and you get "/lib/ld-linux-armhf.so.3 no such file or directory". What is the problem?
The ls executable is built for an incompatible architecture.
452.How would you split a 50 lines file into 2 files of 25 lines each?
You can use the split command this way: split -l 25 some_file
453.What is a file descriptor? What file descriptors are you familiar with?
A file descriptor, also known as a file handle, is a unique number which identifies an open file in the operating system.
In Linux (and Unix) the first three file descriptors are:
0 - the default data stream for input
1 - the default data stream for output
2 - the default data stream for output related to errors
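The shell exposes these descriptors through redirection (some_command is a placeholder):
some_command < input.txt     # fd 0 (stdin) reads from a file
some_command > output.txt    # fd 1 (stdout) writes to a file
some_command 2> errors.txt   # fd 2 (stderr) writes to a file
ls -l /proc/self/fd          # list the file descriptors of the current process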
454.What is NTP? What is it used for?
455.Explain Kernel OOM
Linux Security
456.What is chroot? In what scenarios would you consider using it?
457.What is SELinux?
458.What is Kerberos?
459.What is nftables?
460.What is the firewalld daemon responsible for?
461.Do you have experience with hardening servers? Can you describe the process?
Linux Networking
462.How to list all the interfaces?
463.What is the loopback (lo) interface?
464.What are the following commands used for?
ip addr
ip route
ip link
ping
netstat
traceroute
465.What is a network namespace? What is it used for?
466.How to check if a certain port is being used?
One of the following would work:
netstat -tnlp | grep <port_number>
lsof -i -n -P | grep <port_number>
467.How can you turn your Linux server into a router?
468.What is a virtual IP? In what situation would you use it?
469.True or False? The MAC address of an interface is assigned/set by the OS
False
470.Can you have more than one default gateway in a given system?
Technically, yes.
471.Which port is used in each of the following protocols?:
SSH
SMTP
HTTP
DNS
HTTPS
SSH - 22
SMTP - 25
HTTP - 80
DNS - 53
HTTPS - 443
472.What is telnet and why is it a bad idea to use it in production? (or at all)
473.What is the routing table? How do you view it?
474.How can you send an HTTP request from your shell?
Using nc is one way
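For example, with nc, or with curl/wget if they are available (example.com is a placeholder):
printf 'GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n' | nc example.com 80
curl -v http://example.com/
wget -qO- http://example.com/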
475.What are packet sniffers? Have you used one in the past? If yes, which packet sniffers have you used and for what purpose?
A packet sniffer is a network utility that captures and analyses the data stream travelling over the targeted network (and some can also inject packets into it).
476.How to list active connections?
477.How to trigger neighbor discovery in IPv6?
One way would be ping6 ff02::1
478.What is network interface bonding and do you know how it's performed in Linux?
479.What network bonding modes are there?
There are a couple of modes:
balance-rr: round robin bonding
active-backup: a fault tolerance mode where only one is active
balance-tlb: Adaptive transmit load balancing
balance-alb: Adaptive load balancing
480.What is a bridge? How it's added in Linux OS?
LINUX DNS
481.How to check what is the hostname of the system?
cat /etc/hostname
You can also run hostnamectl or hostname but that might print only a temporary hostname. The one in the file is the permanent one.
482.What is the file /etc/resolv.conf used for? What does it include?
483.Which tools can you use to perform a DNS query/lookup? You can specify one or more of the following:
dig
host
nslookup
LINUX Packaging
484.Do you have experience with packaging? (as in building packages) Can you explain how it works?
485.How is package installation/removal performed on the distribution you are using?
The answer depends on the distribution being used.
In Fedora/CentOS/RHEL/Rocky it can be done with rpm or dnf commands. In Ubuntu it can be done with the apt command.
486.RPM: explain the spec format (what it should and can include)
487.How do you list the content of a package without actually installing it?
488.How to know which package a file on the system belongs to? Is it a problem if it doesn't belong to any package?
489.Where repositories are stored? (based on the distribution you are using)
490.What is an archive? How do you create one in Linux?
491.How to extract the content of an archive?
492.Why do we need package managers? Why not simply creating archives and publish them?
Package managers allow you to manage packages lifecycle as in installing, removing and updating the packages.
In addition, you can specify in a spec how a certain package will be installed - where to copy the files, which commands to run prior to the installation, post the installation, etc.
LINUX DNF
493.How to look for a package that provides the command /usr/bin/git? (the package isn't necessarily installed)
dnf provides /usr/bin/git
Linux App and Services
494.What can you find in /etc/services?
495.How to make sure a Service starts automatically after a reboot or crash?
Depends on the init system.
Systemd: systemctl enable [service_name]
System V: update-rc.d [service_name] and add the line id:5678:respawn:/bin/sh /path/to/app to /etc/inittab
Upstart: add an Upstart init script at /etc/init/service.conf
496.You run ssh 127.0.0.1 but it fails with "connection refused". What could be the problem?
SSH server is not installed
SSH server is not running
497.How to print the shared libraries required by a certain program? What is it useful for?
498.What is CUPS?
499.What types of web servers are you familiar with?
LINUX Users and Groups
500.What is a "superuser" (or root user)? How is it different from regular users?
501.How do you create users? Where user information is stored?
502.Which file stores information about groups?
503.How do you change/set the password of a user?
504.Which file stores users passwords? Is it visible for everyone?
505.Do you know how to create a new user without using adduser/useradd command?
506.What information is stored in /etc/passwd? explain each field
507.How to add a new user to the system without providing him the ability to log-in into the system?
adduser user_name --shell=/bin/false --no-create-home You can also add a user and then edit /etc/passwd.
508.How to switch to another user? How to switch to the root user?
su command. Use su - to switch to root
509.What is the UID of the root user? What about a regular user?
510.What can you do if you lost/forgot the root password?
Re-install the OS IS NOT the right answer :)
511.What is /etc/skel?
512.How to see a list of who logged-in to the system?
Using the last command.
513.Explain what each of the following commands does:
useradd
usermod
whoami
id
LINUX Hardware
514.Where can you find information on the processor?
/proc/cpuinfo
515.How can you print information on the BIOS, motherboard, processor and RAM?
dmidecode
516.How can you print all the information on connected block devices in your system?
lsblk
LINUX Random
517.Give 5 commands which are two letters long
ls, wc, dd, df, du, ps, ip, cp, cd ...
518.What ways are there for creating a new empty file?
touch new_file
echo -n "" > new_file (or simply > new_file in bash)
519.How does `cd -` work? How does it know the previous location?
$OLDPWD
520.List three ways to print all the files in the current directory
ls
find .
echo *
521.How to count the number of lines in a file? What about words
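Using wc (file.txt is a placeholder):
wc -l file.txt    # number of lines
wc -w file.txt    # number of words
wc -c file.txt    # number of bytes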
522.You define x=2 in /etc/bashrc and x=6 in ~/.bashrc. You then log in to the system. What would be the value of x?
523.What is the difference between man and info?
A good answer can be found here
524.Explain "environment variables". How do you list all environment variables?
525.How to create your own environment variables?
X=2 for example. But this will not be visible in new shells or child processes. To make it available to them as well, use export X=2
526.What does a double dash (--) mean?
It's used in commands to mark the end of command options. One common example is when used with git to discard local changes: git checkout -- some_file
LINUX-AWK
527.What does the awk command do? Have you used it? What for?
From Wikipedia: "AWK is a domain-specific language designed for text processing and typically used as a data extraction and reporting tool"
528.How to print the 4th column in a file?
awk '{print $4}' file
529.How to print every line that is longer than 79 characters?
awk 'length($0) > 79' file
530.What does the lsof command do? Have you used it? What for?
531.What is the difference between find and locate?
SYSTEM CALLS
532.Explain the fork() system call
fork() is used for creating a new process. It does so by cloning the calling process but the child process has its own PID and any memory locks, I/O operations and semaphores are not inherited.
533.Explain the exec() system call
534.What system call is used for listing files?
535.What system call is used for creating a new process?
536.What are the differences between exec() and fork()?
537.Why do we need the wait system call?
wait() is used by a parent process to wait for the child process to finish execution. If wait is not used by a parent process then a child process might become a zombie process.
538.What does execve() do?
Executes a program. The program is passed as a filename (or path) and must be a binary executable or a script.
539.What is the return value of malloc?
540.Explain the pipe() system call. What is it used for?
Unix pipe implementation
"Pipes provide a unidirectional interprocess communication channel. A pipe has a read end and a write end. Data written to the write end of a pipe can be read from the read end of the pipe. A pipe is created using pipe(2), which returns two file descriptors, one referring to the read end of the pipe, the other referring to the write end."
541.What happens when you execute ls -l?
The shell reads the input using getline(), which reads the input file stream and stores it into a buffer as a string
The buffer is broken down into tokens and stored in an array this way: {"ls", "-l", "NULL"}
The shell checks if an expansion is required (in the case of ls *.c)
Once the program is in memory, its execution starts, first by calling readdir()
Notes:
getline() originates in the GNU C library and is used to read lines from an input stream and store them in a buffer
542.What happens when you execute ls -l *.log?
543.What does the readdir() system call do?
544.What exactly does the command alias x=y do?
linux Filesystem and Files
545.How to create a file of a certain size?
There are a couple of ways to do that:
dd if=/dev/urandom of=new_file.txt bs=2MB count=1
truncate -s 2M new_file.txt
fallocate -l 2097152 new_file.txt
546.What does the following block do?:
open("/my/file") = 5
read(5, "file content")
These system calls are reading the file /my/file and 5 is the file descriptor number.
547.Describe three different ways to remove a file (or its content)
548.What is the difference between a process and a thread?
549.What is context switch?
From wikipedia: a context switch is the process of storing the state of a process or thread, so that it can be restored and resume execution at a later point
550.You found there is a server with high CPU load but you didn't find a process with high CPU. How is that possible?
LINUX Advanced- NETWORKING
551.When you run ip a you see there is a device called 'lo'. What is it and why do we need it?
552.What does the traceroute command do? How does it work?
Another common way to ask this question is "what field of the IP header does traceroute modify?"
553.What is network bonding? What types are you familiar with?
554.How to link two separate network namespaces so you can ping an interface on one namespace from the second one?
555.What are cgroups?
556.Explain Process Descriptor and Task Structure
557.What are the differences between threads and processes?
558.Explain Kernel Threads
559.What happens when socket system call is used?
This is a good article about the topic: https://ops.tips/blog/how-linux-creates-sockets
560.You executed a script and while still running, it got accidentally removed. Is it possible to restore the script while it's still running?
linux Memory:
561.What is the difference between MemFree and MemAvailable in /proc/meminfo?
MemFree - the amount of unused physical RAM in your system.
MemAvailable - an estimate of how much memory is available for starting new workloads without pushing the system to use swap, based on MemFree, Active(file), Inactive(file), and SReclaimable.
562.What is the difference between paging and swapping?
563.Explain what is OOM killer
Distribution
564.What is a Linux distribution?
565.What Linux distributions are you familiar with?
566.What are the components of a Linux distribution?
Kernel
Utilities
Services
Software/Packages Management
Linux Misc
567.Are wildcards implemented in user space or in kernel space?
568.If I plug a new device into a Linux machine, where on the system, a new device entry/file will be created?
/dev
569.Why are there different sections in man? What is the difference between the sections?
570.What is User-mode Linux?
Linux Nerds
571.Under which license is Linux distributed?
GPL v2
OS
572.What is an operating system?
There are many ways to answer that. For those who look for simplicity, the book "Operating Systems: Three Easy Pieces" offers a nice version:
"responsible for making it easy to run programs (even allowing you to seemingly run many at the same time), allowing programs to share memory, enabling programs to interact with devices, and other fun stuff like that".
573.What is "virtual memory" and what purpose it serves?
574.What is demand paging?
575.What is copy-on-write or shadowing?
576.What is the kernel responsible for?
The kernel is part of the operating system and is responsible for tasks like:
Allocating memory
Schedule processes
Control CPU
577.True or False? Some pieces of the code in the kernel are loaded into protected areas of the memory so applications can't overwrite them
True
578.What is POSIX?
Processes
579.Can you explain what is a process?
A process is a running program. A program is one or more instructions and the program (or process) is executed by the operating system.
580.If you had to design an API for processes in an operating system, what would this API look like?
It would support the following:
Create - allow to create new processes
Delete - allow to remove/destroy processes
State - allow to check the state of the process, whether it's running, stopped, waiting, etc.
Stop - allow to stop a running process
581.How is a process created?
The OS reads the program's code and any additional relevant data
The program's bytes are loaded into memory, or more specifically, into the address space of the process
Memory is allocated for the program's stack (aka run-time stack). The stack is also initialized by the OS with data like argv, argc and the parameters to main()
Memory is allocated for the program's heap, which is required for data structures like linked lists and hash tables
I/O initialization tasks are performed, like in Unix/Linux based systems where each process has 3 file descriptors (input, output and error)
The OS runs the program, starting from main()
Note: the loading of the program's code into memory is done lazily, which means the OS loads only the partial, relevant pieces required for the process to run and not the entire code.
582.True or False? The loading of the program into the memory is done eagerly (all at once)
False. It was true in the past but today's operating systems perform lazy loading which means only the relevant pieces required for the process to run are loaded first.
583.What are different states of a process?
Running - it's executing instructions
Ready - it's ready to run but for different reasons it's on hold
Blocked - it's waiting for some operation to complete. For example I/O disk request
584.What is Inter Process Communication (IPC)?
Concurrency
585.Explain what is Semaphore and what its role in operating system
586.What is cache? What is buffer?
Buffer: a reserved place in RAM which is used to hold data for temporary purposes.
Cache: usually used when processes read from and write to the disk, to make the process faster by making similar data used by different programs easily accessible.
Virtualization:
587.Explain what is Virtualization
588.What is a hypervisor?
Red Hat: "A hypervisor is software that creates and runs virtual machines (VMs). A hypervisor, sometimes called a virtual machine monitor (VMM), isolates the hypervisor operating system and resources from the virtual machines and enables the creation and management of those VMs."
589.What types of hypervisors are there?
Hosted hypervisors and bare-metal hypervisors.
590.What are the advantages and disadvantages of a bare-metal hypervisor over a hosted hypervisor?
Due to having its own drivers and direct access to hardware components, a bare-metal hypervisor will often have better performance, stability and scalability.
On the other hand, there will probably be some limitation regarding loading (any) drivers so a hosted hypervisor will usually benefit from having a better hardware compatibility.
591.What types of virtualization are there?
Operating system virtualization
Network functions virtualization
Desktop virtualization
592.Is containerization a type of virtualization?
Yes, it's operating-system-level virtualization, where the kernel is shared and allows the use of multiple isolated user-space instances.
593.What is "time sharing"?
Even when using a system with one physical CPU, it's possible to allow multiple users to work on it and run programs. This is possible with time sharing where computing resources are shared in a way it seems to the user the system has multiple CPUs but in fact it's simply one CPU shared by applying multiprogramming and multi-tasking.
594.What is "space sharing"?
Somewhat the opposite of time sharing. While in time sharing a resource is used for a while by one entity and then the same resource can be used by another entity, in space sharing the space is shared by multiple entities but in a way that it is not transferred between them.
It's used by one entity until that entity decides to give it up. Take storage for example: a file is yours until you decide to delete it.
ANSIBLE
595.Describe each of the following components in Ansible, including the relationship between them:
Task
Module
Play
Playbook
Role
Task – a call to a specific Ansible module
Module – the actual unit of code executed by Ansible on your own host or a remote host. Modules are indexed by category (database, file, network, …) and are also referred to as task plugins.
Play – One or more tasks executed on a given host(s)
Playbook – One or more plays. Each play can be executed on the same or different hosts
Role – Ansible roles allows you to group resources based on certain functionality/service such that they can be easily reused. In a role, you have directories for variables, defaults, files, templates, handlers, tasks, and metadata. You can then use the role by simply specifying it in your playbook.
596.How is Ansible different from other automation tools?
Ansible is:
Agentless
Minimal run requirements (Python & SSH) and simple to use
Default mode is "push" (it supports also pull)
Focus on simplicity and ease-of-use
597.True or False? Ansible follows the mutable infrastructure paradigm
True.
598.True or False? Ansible uses declarative style to describe the expected end state
False. It uses a procedural style.
599.What kind of automation you wouldn't do with Ansible and why?
While it's possible to provision resources with Ansible, some prefer to use tools that follow the immutable infrastructure paradigm. Ansible doesn't save state by default, so a task that creates 5 instances, when executed again, will create 5 additional instances (unless an additional check is implemented), while other tools will check if 5 instances exist and, if only 4 exist, will create just one more.
600.What is an inventory file and how do you define one?
An inventory file defines hosts and/or groups of hosts on which Ansible tasks are executed.
An example of inventory file:
192.168.1.2
192.168.1.3
192.168.1.4
[web_servers]
190.40.2.20
190.40.2.21
190.40.2.22
601.What is a dynamic inventory file? When you would use one?
A dynamic inventory file tracks hosts from one or more sources like cloud providers and CMDB systems.
You should use one when using external sources and especially when the hosts in your environment are being automatically
spun up and shut down, without you tracking every change in these sources.
602.How do you list all modules and how can you see details on a specific module?
Ansible online docs
ansible-doc -l for a list of modules and ansible-doc [module_name] for detailed information on a specific module
603.Write a task to create the directory ‘/tmp/new_directory’
- name: Create a new directory
  file:
    path: "/tmp/new_directory"
    state: directory
604.You want to run Ansible playbook only on specific minor version of your OS, how would you achieve that?
605.What is the "become" directive used for in Ansible?
606.What are facts? How to see all the facts of a certain host?
607.What would be the result of the following play?
---
- name: Print information about my host
  hosts: localhost
  gather_facts: 'no'
  tasks:
    - name: Print hostname
      debug:
        msg: "It's me, {{ ansible_hostname }}"
When given a written code, always inspect it thoroughly. If your answer is “this will fail” then you are right. We are using a fact (ansible_hostname), which is a gathered piece of information from the host we are running on. But in this case, we disabled facts gathering (gather_facts: no) so the variable would be undefined which will result in failure.
608.What would be the result of running the following task? How to fix it?
- hosts: localhost
  tasks:
    - name: Install zlib
      package:
        name: zlib
        state: present
609.What would be the result of running the following task? How to fix it?
- hosts: localhost
  tasks:
    - name: Install zlib
      package:
        name: zlib
        state: present
610.Which Ansible best practices are you familiar with?. Name at least three
611.Explain the directory layout of an Ansible role
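A sketch of the standard layout, as generated by ansible-galaxy init (my_role is a placeholder name):
ansible-galaxy init my_role
# my_role/
#   defaults/main.yml    - default variables (lowest precedence)
#   files/               - static files copied to managed hosts
#   handlers/main.yml    - handlers notified by tasks
#   meta/main.yml        - role metadata and dependencies
#   tasks/main.yml       - the main list of tasks
#   templates/           - Jinja2 templates
#   tests/               - test inventory and playbook
#   vars/main.yml        - role variables (higher precedence than defaults)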
612.What 'blocks' are used for in Ansible?
613.How do you handle errors in Ansible?
614.You would like to run a certain command if a task fails. How would you achieve that?
615.Write a playbook to install ‘zlib’ and ‘vim’ on all hosts if the file ‘/tmp/mario’ exists on the system.
---
- hosts: all
  vars:
    mario_file: /tmp/mario
    package_list:
      - 'zlib'
      - 'vim'
  tasks:
    - name: Check for mario file
      stat:
        path: "{{ mario_file }}"
      register: mario_f
    - name: Install zlib and vim if mario file exists
      become: "yes"
      package:
        name: "{{ item }}"
        state: present
      with_items: "{{ package_list }}"
      when: mario_f.stat.exists
616.Write a single task that verifies all the files in files_list variable exist on the host
- name: Ensure all files exist
  assert:
    that:
      - item.stat.exists
  loop: "{{ files_list }}"
617.Write a playbook to deploy the file ‘/tmp/system_info’ on all hosts except for controllers group, with the following content
I'm <HOSTNAME> and my operating system is <OS>
Replace <HOSTNAME> and <OS> with the actual data for the specific host you are running on
The playbook to deploy the system_info file
---
- name: Deploy /tmp/system_info file
  hosts: all:!controllers
  tasks:
    - name: Deploy /tmp/system_info
      template:
        src: system_info.j2
        dest: /tmp/system_info
The content of the system_info.j2 template
# {{ ansible_managed }}
I'm {{ ansible_hostname }} and my operating system is {{ ansible_distribution }}
618.The variable 'whoami' defined in the following places:
role defaults -> whoami: mario
extra vars (variables you pass to Ansible CLI with -e) -> whoami: toad
host facts -> whoami: luigi
inventory variables (doesn’t matter which type) -> whoami: bowser
According to variable precedence, which one will be used?
The right answer is ‘toad’.
Variable precedence is about how variables override each other when they set in different locations. If you didn’t experience it so far I’m sure at some point you will, which makes it a useful topic to be aware of.
In the context of our question, the order will be extra vars (always override any other variable) -> host facts -> inventory variables -> role defaults (the weakest).
Here is the order of precedence from least to greatest (the last listed variables winning prioritization):
command line values (eg “-u user”)
role defaults [1]
inventory file or script group vars [2]
inventory group_vars/all [3]
playbook group_vars/all [3]
inventory group_vars/* [3]
playbook group_vars/* [3]
inventory file or script host vars [2]
inventory host_vars/* [3]
playbook host_vars/* [3]
host facts / cached set_facts [4]
play vars
play vars_prompt
play vars_files
role vars (defined in role/vars/main.yml)
block vars (only for tasks in block)
task vars (only for the task)
include_vars
set_facts / registered vars
role (and include_role) params
include params
extra vars (always win precedence)
A full list can be found at PlayBook Variables . Also, note there is a significant difference between Ansible 1.x and 2.x.
619.Explain the difference between Forks, Serial and Throttle.
Serial is like running the playbook for each host (or batch of hosts) in turn: Ansible waits for the complete playbook to finish on one batch before moving on to the next. forks controls how many hosts a single task runs on in parallel; with forks=1 the first task runs on one host at a time, but it still runs on every host before the next task is started. The default forks value in Ansible is 5.
[defaults]
forks = 30
- hosts: webservers
  serial: 1
  tasks:
    - name: ...
Ansible also supports throttle. This keyword limits the number of workers up to the maximum set via the forks setting or serial. This can be useful for restricting tasks that may be CPU-intensive or interact with a rate-limiting API.
tasks:
  - command: /path/to/cpu_intensive_command
    throttle: 1
620.What is ansible-pull? How is it different from how ansible-playbook works?
621.What is Ansible Vault?
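Ansible Vault encrypts sensitive data (passwords, keys, variable files) so it can safely be kept in version control. Typical commands (file names are placeholders):
ansible-vault create secrets.yml            # create a new encrypted file
ansible-vault encrypt existing_vars.yml     # encrypt an existing file
ansible-vault edit secrets.yml              # edit an encrypted file in place
ansible-playbook site.yml --ask-vault-pass  # prompt for the vault password when running a playbook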
622.Demonstrate each of the following with Ansible:
Conditionals
Loops
623.What are filters? Do you have experience with writing filters?
624.Write a filter to capitalize a string
# defined inside a FilterModule class in a file under filter_plugins/
def cap(self, string):
    return string.capitalize()
625.You would like to run a task only if previous task changed anything. How would you achieve that?
626.What are callback plugins? What can you achieve by using callback plugins?
627.What is Ansible Collections?
628.File '/tmp/exercise' includes the following content
Goku = 9001
Vegeta = 5200
Trunks = 6000
Gotenks = 32
With one task, switch the content to:
Goku = 9001
Vegeta = 250
Trunks = 40
Gotenks = 32
- name: Change saiyans levels
lineinfile:
dest: /tmp/exercise
regexp: "{{ item.regexp }}"
line: "{{ item.line }}"
with_items:
- { regexp: '^Vegeta', line: 'Vegeta = 250' }
- { regexp: '^Trunks', line: 'Trunks = 40' }
...
629.How do you test your Ansible based projects?
630.What is Molecule? How does it work?
631.You run Ansible tests and you get "idempotence test failed". What does it mean? Why is idempotence important?
TERRAFORM
632.What is Terraform
Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.
Configuration files describe to Terraform the components needed to run a single application or your entire datacenter. Terraform generates an execution plan describing what it will do to reach the desired state, and then executes it to build the described infrastructure. As the configuration changes, Terraform is able to determine what changed and create incremental execution plans which can be applied.
The infrastructure Terraform can manage includes low-level components such as compute instances, storage, and networking, as well as high-level components such as DNS entries, SaaS features, etc.
Examples work best to showcase Terraform. Please see the use cases.
The key features of Terraform are:
Infrastructure as Code
Infrastructure is described using a high-level configuration syntax. This allows a blueprint of your datacenter to be versioned and treated as you would any other code. Additionally, infrastructure can be shared and re-used.
Execution Plans
Terraform has a "planning" step where it generates an execution plan. The execution plan shows what Terraform will do when you call apply. This lets you avoid any surprises when Terraform manipulates infrastructure.
Resource Graph
Terraform builds a graph of all your resources, and parallelizes the creation and modification of any non-dependent resources. Because of this, Terraform builds infrastructure as efficiently as possible, and operators get insight into dependencies in their infrastructure.
Change Automation
Complex changesets can be applied to your infrastructure with minimal human interaction. With the previously mentioned execution plan and resource graph, you know exactly what Terraform will change and in what order, avoiding many possible human errors.
633.What benefits does infrastructure-as-code have?
fully automated process of provisioning, modifying and deleting your infrastructure
version control for your infrastructure which allows you to quickly rollback to previous versions
validate infrastructure quality and stability with automated tests and code reviews
makes infrastructure tasks less repetitive
634.Why Terraform and not other technologies? (e.g. Ansible, Puppet, CloudFormation)
A common wrong answer is to say that Ansible and Puppet are configuration management tools and Terraform is a provisioning tool. While technically true, it doesn't mean Ansible and Puppet can't be used for provisioning infrastructure. Also, it doesn't explain why Terraform should be used over CloudFormation if at all.
The benefits of Terraform over the other tools:
It follows the immutable infrastructure approach which has benefits like avoiding a configuration drift over time
Ansible and Puppet are more procedural (you mention what to execute in each step) while Terraform is declarative, since you describe the overall desired state and not each resource or task. You can give the example of going from 1 to 2 servers in each tool: in Terraform you simply specify 2, while in Ansible and Puppet you have to explicitly provision only the one additional server.
635.True or False? Terraform follows the mutable infrastructure paradigm
False. Terraform follows immutable infrastructure paradigm.
636.True or False? Terraform uses declarative style to describe the expected end state
True
637.Explain what is "Terraform configuration"
A configuration is a root module along with a tree of child modules that are called as dependencies from the root module.
638.What is HCL?
HCL stands for Hashicorp Configuration Language. It is the language Hashicorp made to use as the configuration language for a number of its tools, including terraform.
639.Explain each of the following:
Provider
Resource
Provisioner
* Provider is a plugin for a cloud or service platform - GitHub, AWS, PostgreSQL, etc. - which Terraform uses to make API calls and provision the services and components that platform offers.
* Resources are the services and components you provision on these platforms.
* Provisioner in Terraform's lingo refers to running actions on a resource after it is created (for example local-exec, remote-exec, or handing off to configuration tools like Ansible or SaltStack) in order to prepare it for service.
640.What is the terraform.tfstate file used for?
It keeps track of the IDs of created resources so that Terraform knows what it is managing.
641.How do you rename an existing resource?
terraform state mv
642.Explain what the following commands do:
terraform init
terraform plan
terraform validate
terraform apply
terraform init scans your code to figure out which providers you are using and downloads them. terraform plan lets you see what Terraform is about to do before actually doing it. terraform validate checks if the configuration is syntactically valid and internally consistent within a directory. terraform apply will provision the resources specified in the .tf files.
643.How do you declare a variable whose value is supplied by an external source or at terraform apply time?
Declare it without a default value: variable "my_var" {}
644.Give an example of several Terraform best practices
645.Explain how implicit and explicit dependencies work in Terraform
646.What is local-exec and remote-exec in the context of provisioners?
647.What is a "tainted" resource?
It's a resource which was successfully created but failed during provisioning. Terraform will fail and mark this resource as "tainted".
648.What does terraform taint do?
terraform taint resource.id manually marks the resource as tainted in the state file. So when you run terraform apply the next time, the resource will be destroyed and recreated.
649.What types of variables are supported in Terraform?
string
number
bool
list(<TYPE>)
set(<TYPE>)
map(<TYPE>)
object({<ATTR_NAME> = <TYPE>, ... })
tuple([<TYPE>, ...])
650.What is a data source? In what scenarios, for example, would you need to use it?
Data sources look up or compute values that can be used elsewhere in the Terraform configuration.
There are quite a few cases you might need to use them:
you want to reference resources not managed through terraform
you want to reference resources managed by a different terraform module
you want to cleanly compute a value with typechecking, such as with aws_iam_policy_document
651.What are output variables and what does terraform output do?
Output variables are named values that are sourced from the attributes of a module. They are stored in terraform state, and can be used by other modules through remote_state
652.Explain Modules
653.What is the Terraform Registry?
654.Explain remote-exec and local-exec
655.Explain "Remote State". When would you use it and how?
Terraform generates a `terraform.tfstate` JSON file that describes the components/services provisioned on the specified provider. Remote State stores this file in remote storage media to enable collaboration among team members.
656.Explain "State Locking"
State locking is a mechanism that blocks operations against a specific state file so that multiple callers don't perform conflicting operations at the same time. Once the first caller's lock is released, another team member may go ahead and carry out their own operation. Terraform will still first check the state file to see if the desired resource already exists, and if not, it goes ahead and creates it.
657.What is the "Random" provider? What is it used for
The random provider aids in generating numeric or alphabetic characters to use as a prefix or suffix for a desired named identifier.
658.How do you test a terraform module?
Many examples are acceptable, but the most common answer would likely be using the tool Terratest, and to test that a module can be initialized, can create resources, and can destroy those resources cleanly.
659.Aside from .tfvars files or CLI arguments, how can you inject dependencies from other modules?
The built-in terraform way would be to use remote-state to lookup the outputs from other modules. It is also common in the community to use a tool called terragrunt to explicitly inject variables between modules.
Containers
660.What is a Container? What is it used for?
661.How are containers different from virtual machines (VMs)?
The primary difference between containers and VMs is that containers allow you to virtualize multiple workloads on the operating system while in the case of VMs the hardware is being virtualized to run multiple machines each with its own OS. You can also think about it as containers are for OS-level virtualization while VMs are for hardware virtualization.
Containers don't require an entire guest operating system as VMs. Containers share the system's kernel as opposed to VMs
It usually takes a few seconds to set up a container, as opposed to VMs which can take minutes (or at least more time than containers), since a VM has an entire OS to boot and initialize while with a container you mainly launch the app itself.
Containers are isolated from each other, but not as concretely as virtual machines. It is possible for a malicious user to break into the host OS from a container and vice versa.
662.In which scenarios would you use containers and in which you would prefer to use VMs?
You should choose VMs when:
you need to run an application which requires all the resources and functionalities of an OS
you need full isolation and security
You should choose containers when:
you need a lightweight solution
Running multiple versions or instances of a single application
663.Explain Podman or Docker architecture
664.Describe in detail what happens when you run `podman/docker run hello-world`?
The Docker CLI passes your request to the Docker daemon. The Docker daemon downloads the image from Docker Hub, creates a new container using the image it downloaded, and redirects output from the container to the Docker CLI, which redirects it to the standard output.
665.What are `dockerd, docker-containerd, docker-runc, docker-containerd-ctr, docker-containerd-shim` ?
dockerd - The Docker daemon itself. The highest level component in your list and also the only 'Docker' product listed. Provides all the nice UX features of Docker.
(docker-)containerd - Also a daemon, listening on a Unix socket, exposes gRPC endpoints. Handles all the low-level container management tasks, storage, image distribution, network attachment, etc...
(docker-)containerd-ctr - A lightweight CLI to directly communicate with containerd. Think of it as how 'docker' is to 'dockerd'.
(docker-)runc - A lightweight binary for actually running containers. Deals with the low-level interfacing with Linux capabilities like cgroups, namespaces, etc...
(docker-)containerd-shim - After runC actually runs the container, it exits (allowing us to not have any long-running processes responsible for our containers). The shim is the component which sits between containerd and runc to facilitate this.
666.Describe difference between cgroups and namespaces
cgroup: Control Groups provide a mechanism for aggregating/partitioning sets of tasks, and all their future children, into hierarchical groups with specialized behaviour. namespace: wraps a global system resource in an abstraction that makes it appear to the processes within the namespace that they have their own isolated instance of the global resource.
In short:
Cgroups = limits how much you can use; namespaces = limits what you can see (and therefore use)
Cgroups involve resource metering and limiting: memory, CPU, block I/O, network
Namespaces provide processes with their own view of the system
Multiple namespaces: pid, net, mnt, uts, ipc, user
667.Describe in detail what happens when you run `docker pull image:tag`?
Docker CLI passes your request to Docker daemon. Dockerd Logs shows the process
docker.io/library/busybox:latest resolved to a manifestList object with 9 entries; looking for a unknown/amd64 match
found match for linux/amd64 with media type application/vnd.docker.distribution.manifest.v2+json, digest sha256:400ee2ed939df769d4681023810d2e4fb9479b8401d97003c710d0e20f7c49c6
pulling blob "sha256:61c5ed1cbdf8e801f3b73d906c61261ad916b2532d6756e7c4fbcacb975299fb Downloaded 61c5ed1cbdf8 to tempfile /var/lib/docker/tmp/GetImageBlob909736690
Applying tar in /var/lib/docker/overlay2/507df36fe373108f19df4b22a07d10de7800f33c9613acb139827ba2645444f7/diff" storage-driver=overlay2
Applied tar sha256:514c3a3e64d4ebf15f482c9e8909d130bcd53bcc452f0225b0a04744de7b8c43 to 507df36fe373108f19df4b22a07d10de7800f33c9613acb139827ba2645444f7, size: 1223534
668.How do you run a container?
podman run or docker run
669.What does `podman commit` do? When would you use it?
Create a new image from a container’s changes
670.How would you transfer data from one container into another?
671.What happens to the data of the container when the container exits?
672.Explain what each of the following commands do:
docker run
docker rm
docker ps
docker pull
docker build
docker commit
673.How do you remove old, non running, containers?
To remove one or more containers use the docker container rm command followed by the IDs of the containers you want to remove.
The docker system prune command will remove all stopped containers, all dangling images, and all unused networks
docker rm $(docker ps -a -q) - This command will delete all stopped containers. The command docker ps -a -q will return all existing container IDs and pass them to the rm command which will delete them. Any running containers will not be deleted.
DOCKERFILE
674.What is Dockerfile
Docker can build images automatically by reading the instructions from a Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.
675.What is the difference between ADD and COPY in Dockerfile?
COPY takes in a src and destination. It only lets you copy in a local file or directory from your host (the machine building the Docker image) into the Docker image itself. ADD lets you do that too, but it also supports 2 other sources. First, you can use a URL instead of a local file / directory. Secondly, you can extract a tar file from the source directly into the destination. Although ADD and COPY are functionally similar, generally speaking, COPY is preferred. That’s because it’s more transparent than ADD. COPY only supports the basic copying of local files into the container, while ADD has some features (like local-only tar extraction and remote URL support) that are not immediately obvious.
676.What is the difference between CMD and RUN in Dockerfile?
RUN lets you execute commands inside of your Docker image. These commands get executed once at build time and get written into your Docker image as a new layer. CMD is the command the container executes by default when you launch the built image. A Dockerfile can only have one CMD. You could say that CMD is a Docker run-time operation, meaning it’s not something that gets executed at build time. It happens when you run an image. A running image is called a container.
677.Do you perform any checks or testing related to your Dockerfile?
A common answer to this is to use the hadolint project, which is a linter based on Dockerfile best practices.
678.Explain what is Docker compose and what is it used for
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.
For example, you can use it to set up ELK stack where the services are: elasticsearch, logstash and kibana. Each running in its own container.
679.Describe the process of using Docker Compose
Define the services you would like to run together in a docker-compose.yml file
Run docker-compose up to run the services
680.Explain Docker interlock
681.Where can you store Docker images?
682.What is Docker Hub?
683.What is the difference between Docker Hub and Docker cloud?
Docker Hub is a native Docker registry service which allows you to run pull and push commands to install and deploy Docker images from the Docker Hub.
Docker Cloud is built on top of the Docker Hub so Docker Cloud provides you with more options/features compared to Docker Hub. One example is Swarm management which means you can create new swarms in Docker Cloud.
684.What is Docker Repository?
685.Explain image layers
A Docker image is built up from a series of layers. Each layer represents an instruction in the image’s Dockerfile. Each layer except the very last one is read-only. Each layer is only a set of differences from the layer before it. The layers are stacked on top of each other. When you create a new container, you add a new writable layer on top of the underlying layers. This layer is often called the “container layer”. All changes made to the running container, such as writing new files, modifying existing files, and deleting files, are written to this thin writable container layer. The major difference between a container and an image is the top writable layer. All writes to the container that add new or modify existing data are stored in this writable layer. When the container is deleted, the writable layer is also deleted. The underlying image remains unchanged. Because each container has its own writable container layer, and all changes are stored in this container layer, multiple containers can share access to the same underlying image and yet have their own data state.
686.What best practices are you familiar with related to working with containers?
687.How do you manage persistent storage in Docker?
688.How can you connect from the inside of your container to the localhost of your host, where the container runs?
689.How do you copy files from Docker container to the host and vice versa?
KUBERNETES
690.What is Kubernetes? Why are organizations using it?
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.
To understand what Kubernetes is good for, let's look at some examples:
You would like to run a certain application in a container on multiple different locations. Sure, if it's 2-3 servers/locations, you can do it by yourself, but it can be challenging to scale that up to many additional locations.
Performing updates and changes across hundreds of containers
Handling cases where the current load requires scaling up (or down)
691.What is a Kubernetes Cluster?
Red Hat Definition: "A Kubernetes cluster is a set of node machines for running containerized applications. If you’re running Kubernetes, you’re running a cluster.
At a minimum, a cluster contains a worker node and a master node."
KUBERNETES NODES
692.What is a Node?
A node is a virtual machine or a physical server that serves as a worker for running the applications. It's recommended to have at least 3 nodes in Kubernetes production environment.
693.What is the master node responsible for?
The master coordinates all the workflows in the cluster:
Scheduling applications
Managing desired state
Rolling out new updates
694.What do we need the worker nodes for?
The workers are the nodes which run the applications and workloads.
695.What is kubectl?
696.Which command do you run to view your nodes?
kubectl get nodes
697.True or False? Every cluster must have 0 or more master nodes and at least one worker
False. A Kubernetes cluster consists of at least 1 master and can have 0 workers (although that wouldn't be very useful...)
698.What are the components of the master node?
API Server - the Kubernetes API. All cluster components communicate through it
Scheduler - assigns an application with a worker node it can run on
Controller Manager - cluster maintenance (replications, node failures, etc.)
etcd - stores cluster configuration
699.What are the components of a worker node
Kubelet - an agent responsible for node communication with the master.
Kube-proxy - load balancing traffic between app components
Container runtime - the engine that runs the containers (Podman, Docker, ...)
KUBERNETES POD
700.Explain what is a pod
701.Deploy a pod called "my-pod" using the nginx:alpine image
kubectl run my-pod --image=nginx:alpine --restart=Never
702.How many containers can a pod contain?
Multiple containers but in most cases it would be one container per pod.
703.What does it mean that "pods are ephemeral"?
It means they would eventually die and that pods are unable to heal themselves, so it is recommended that you don't create them directly.
704.Which command do you run to view all pods running in all namespaces?
kubectl get pods --all-namespaces
705.How to delete a pod?
kubectl delete pod pod_name
KUBERNETES DEPLOYMENT
706.What is a "Deployment" in Kubernetes?
707.How to create a deployment?
cat << EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
EOF
708.How to edit a deployment?
kubectl edit deployment some-deployment
709.What happens after you edit a deployment and change the image?
The pod will terminate and another, new pod, will be created.
Also, when looking at the replicasets, you'll see the old replicaset doesn't have any pods and a new replicaset is created.
710.How to delete a deployment?
One way is by specifying the deployment name: kubectl delete deployment [deployment_name] Another way is using the deployment configuration file: kubectl delete -f deployment.yaml
711.What happens when you delete a deployment?
The pod related to the deployment will terminate and the replicaset will be removed.
712.How do you make an app accessible on a private or external network?
Using a Service.
KUBERNETES SERVICE
713.What is a Service in Kubernetes?
"An abstract way to expose an application running on a set of Pods as a network service." - read more [here]( https://kubernetes.io/docs/concepts/services-networking/service)
In simpler words, it allows you to expose an application by attaching a permanent IP address, for example, to a certain pod.
714.True or False? The lifecycle of Pods and Services isn't connected so when a pod dies, the service still stays
True
715.What Service types are there?
ClusterIP
NodePort
LoadBalancer
ExternalName
716.How to get information on a certain service?
kubectl describe service [service_name]
717.How to verify that a certain service forwards the requests to a pod
Run kubectl describe service and check whether the IPs listed under "Endpoints" match IPs from the output of kubectl get pod -o wide
718.What is the difference between an external and an internal service?
719.How to turn the following service into an external one?
spec:
selector:
app: some-app
ports:
- protocol: TCP
port: 8081
targetPort: 8081
Adding type: LoadBalancer and nodePort
spec:
selector:
app: some-app
type: LoadBalancer
ports:
- protocol: TCP
port: 8081
targetPort: 8081
nodePort: 32412
720.What would you use to route traffic from outside the Kubernetes cluster to services within a cluster?
Ingress
KUBERNETES INGRESS
721.What is Ingress?
From Kubernetes docs: "Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource."
722.Complete the following configuration file to make it Ingress
metadata:
name: someapp-ingress
spec:
There are several ways to answer this question.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: someapp-ingress
spec:
  rules:
  - host: my.host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: someapp-internal-service
            port:
              number: 8080
723.Explain the meaning of "http", "host" and "backend" directives
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: someapp-ingress
spec:
  rules:
  - host: my.host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: someapp-internal-service
            port:
              number: 8080
host is the entry point of the cluster, so basically a valid domain address that maps to a cluster node's IP address
the http line specifies that incoming requests will be forwarded to the internal service using HTTP
backend references the internal Service (service.name is the Service's name from its metadata and service.port.number is the port from its ports section)
724.What is Ingress Controller?
An implementation for Ingress. It's basically another pod (or set of pods) that evaluates and processes Ingress rules and thus manages all the redirections.
There are multiple Ingress Controller implementations (the one from Kubernetes is Kubernetes Nginx Ingress Controller).
725.What are some use cases for using Ingress?
Multiple sub-domains (multiple host entries, each with its own service)
One domain with multiple services (multiple paths where each one is mapped to a different service/application)
726.How to list Ingress in your namespace?
kubectl get ingress
727.What is Ingress Default Backend?
It specifies what to do with an incoming request to the Kubernetes cluster that isn't mapped to any backend (i.e., there is no rule mapping the request to a service). If the default backend service isn't defined, it's recommended to define one so users still see some kind of message instead of nothing or an unclear error.
728.How to configure a default backend?
Create a Service resource that specifies the name of the default backend as reflected in kubectl describe ingress ... and the port under the ports section.
729.How to configure TLS with Ingress?
Add tls and secretName entries.
spec:
tls:
- hosts:
- some_app.com
secretName: someapp-secret-tls
730.True or False? When configuring Ingress with TLS, the Secret component must be in the same namespace as the Ingress component
True
KUBERNETES CONFIG FILE
731.Which parts does a configuration file have?
It has three main parts:
Metadata
Specification
Status (this is automatically generated and added by Kubernetes)
732.What is the format of a configuration file?
YAML
733.How to get latest configuration of a deployment?
kubectl get deployment [deployment_name] -o yaml
734.Where does Kubernetes get the status data (which is added to the configuration file) from?
etcd
KUBERNETES ETCD
735.What is etcd?
736.True or False? Etcd holds the current status of any kubernetes component
True
737.True or False? The API server is the only component which communicates directly with etcd
True
738.True or False? application data is not stored in etcd
True
KUBERNETES NAMESPACES
739.What are namespaces?
Namespaces allow you to split your cluster into virtual clusters where you can group your applications in a way that makes sense and is completely separated from the other groups (so you can, for example, create an app with the same name in two different namespaces)
740.Why to use namespaces? What is the problem with using one default namespace?
When using the default namespace alone, it becomes hard over time to get an overview of all the applications you manage in your cluster. Namespaces make it easier to organize the applications into groups that make sense, like a namespace for all the monitoring applications and a namespace for all the security applications, etc.
Namespaces can also be useful for managing Blue/Green environments where each namespace can include a different version of an app and also share resources that are in other namespaces (namespaces like logging, monitoring, etc.).
Another use case for namespaces is one cluster, multiple teams. When multiple teams use the same cluster, they might end up stepping on each other's toes. For example, if they end up creating an app with the same name, one of the teams overrides the app of the other team, because there can't be two apps in Kubernetes with the same name (in the same namespace).
741.True or False? When a namespace is deleted all resources in that namespace are not deleted but moved to another default namespace
False. When a namespace is deleted, the resources in that namespace are deleted as well.
742.What special namespaces are there by default when creating a Kubernetes cluster?
default
kube-system
kube-public
kube-node-lease
743.What can you find in kube-system namespace?
Master and Kubectl processes
System processes
744.How to list all namespaces?
kubectl get namespaces
745.What does kube-public contain?
A configmap, which contains cluster information
Publicly accessible data
746.How to get the name of the current namespace?
kubectl config view | grep namespace
747.What does kube-node-lease contain?
It holds information on heartbeats of nodes. Each node gets an object which holds information about its availability.
748.How to create a namespace?
One way is by running kubectl create namespace [NAMESPACE_NAME]
Another way is by using a namespace configuration file:
apiVersion: v1
kind: Namespace
metadata:
  name: some-namespace
749.What does the default namespace contain?
Any resource you create without explicitly specifying another namespace.
750.True or False? With namespaces you can limit the resources consumed by the users/teams
True. With namespaces you can limit CPU, RAM and storage usage.
751.How to switch to another namespace? In other words, how do you change the active namespace?
kubectl config set-context --current --namespace=some-namespace and validate with kubectl config view --minify | grep namespace:
OR
kubens some-namespace
752.What is Resource Quota?
753.How to create a Resource Quota?
kubectl create quota some-quota --hard=cpu=2,pods=2
754.Which resources are accessible from different namespaces?
Service.
755.Let's say you have three namespaces: x, y and z. In x namespace you have a ConfigMap referencing service in z namespace. Can you reference the ConfigMap in x namespace from y namespace?
No, you would have to create a separate ConfigMap in the y namespace.
756.Which service, and in which namespace, is the following file referencing?
apiVersion: v1
kind: ConfigMap
metadata:
name: some-configmap
data:
some_url: samurai.jack
It references the service called "samurai" in the namespace called "jack".
757.Which components can't be created within a namespace?
Volume and Node.
758.How to list all the components that are bound to a namespace?
kubectl api-resources --namespaced=true
759.How to create components in a namespace?
One way is by specifying --namespace like this: kubectl apply -f my_component.yaml --namespace=some-namespace Another way is by specifying it in the YAML itself:
apiVersion: v1
kind: ConfigMap
metadata:
name: some-configmap
namespace: some-namespace
and you can verify with: kubectl get configmap -n some-namespace
KUBERNETES COMMANDS
760.What does kubectl exec do?
761.What does kubectl get all do?
762.What does the command kubectl get pod do?
763.How to see all the components of a certain application?
kubectl get all | grep [APP_NAME]
764.What does kubectl apply -f [file] do?
765.What does the command kubectl api-resources --namespaced=false do?
766.How to print information on a specific pod?
kubectl describe pod pod_name
767.How to execute the command "ls" in an existing pod?
kubectl exec some-pod -it -- ls
768.How to create a service that exposes a deployment?
kubectl expose deploy some-deployment --port=80 --target-port=8080
769.How to create a pod and a service with one command?
kubectl run nginx --image=nginx --restart=Never --port 80 --expose
770.Describe in detail what the following command does kubectl create deployment kubernetes-httpd --image=httpd
771.Why create a Deployment if pods can be launched with a ReplicaSet?
772.How to scale a deployment to 8 replicas?
kubectl scale deploy some-deployment --replicas=8
773.How to get list of resources which are not in a namespace?
kubectl api-resources --namespaced=false
774.How to delete all pods whose status is not "Running"?
kubectl delete pods --field-selector=status.phase!='Running'
775.What does the kubectl logs [pod-name] command do?
776.What does the kubectl describe pod [pod name] command do?
777.How to display the resources usages of pods?
kubectl top pod
778.What does kubectl get componentstatus do?
Outputs the status of each of the control plane components.
779.What is Minikube?
Minikube is a lightweight Kubernetes implementation. It creates a local virtual machine and deploys a simple (single node) cluster.
780.How do you monitor your Kubernetes?
781.You suspect one of the pods is having issues, what do you do?
Start by inspecting the pod's status. We can use the command kubectl get pods (--all-namespaces for pods in system namespaces)
If we see "Error" status, we can keep debugging by running the command kubectl describe pod [name]. In case we still don't see anything useful we can try stern for log tailing.
In case we find out there was a temporary issue with the pod or the system, we can try restarting the pod with the following kubectl scale deployment [name] --replicas=0
Setting the replicas to 0 will shut down the process. Now start it with kubectl scale deployment [name] --replicas=1
782.What does the Kubernetes Scheduler do?
783.What happens to running pods if you stop Kubelet on the worker nodes?
784.What happens when pods use too much memory (more than their limit)?
They become candidates for termination
785.Describe how roll-back works
786.True or False? Memory is a compressible resource, meaning that when a container reaches the memory limit, it will keep running
False. CPU is a compressible resource while memory is a non-compressible resource - once a container reaches the memory limit, it will be terminated.
787.What is the control loop? How does it work?
KUBERNETES OPERATOR
788.What is an Operator?
From the Kubernetes docs:
"Operators are software extensions to Kubernetes that make use of custom resources to manage applications and their components. Operators follow Kubernetes principles, notably the control loop."
789.Why do we need Operators?
The process of managing stateful applications in Kubernetes isn't as straightforward as managing stateless applications, where reaching the desired state and upgrades are both handled the same way for every replica. In stateful applications, upgrading each replica might require different handling due to the stateful nature of the app; each replica might be in a different state. As a result, we often need a human operator to manage stateful applications. Kubernetes Operators are supposed to assist with this.
This also helps with automating a standard process on multiple Kubernetes clusters
790.What components does the Operator consist of?
CRD (custom resource definition)
Controller - Custom control loop which runs against the CRD
791.How does an Operator work?
It uses the control loop used by Kubernetes in general. It watches for changes in the application state. The difference is that it uses a custom control loop.
In addition, it also makes use of CRDs (Custom Resource Definitions), so it basically extends the Kubernetes API.
792.True or False? Kubernetes Operators are used for stateful applications
True
793.What is the Operator Framework?
An open source toolkit used to manage Kubernetes-native applications, called operators, in an automated and efficient way.
794.What components does the Operator Framework consist of?
Operator SDK - allows developers to build operators
Operator Lifecycle Manager - helps to install, update and generally manage the lifecycle of all operators
Operator Metering - Enables usage reporting for operators that provide specialized services
795.Describe in detail what is the Operator Lifecycle Manager
It's part of the Operator Framework, used for managing the lifecycle of operators. It basically extends Kubernetes so a user can use a declarative way to manage operators (installation, upgrade, ...).
796.What does the openshift-operator-lifecycle-manager namespace include?
It includes:
catalog-operator - Resolves and installs ClusterServiceVersions and the resources they specify.
olm-operator - Deploys applications defined by ClusterServiceVersion resource
797.What is kubeconfig? What do you use it for?
798.Can you use a Deployment for stateful applications?
799.Explain StatefulSet
KUBERNETES REPLICASET
800.What is the purpose of ReplicaSet?
801.How does a ReplicaSet work?
802.What happens when a replica dies?
KUBERNETES SECRETS
803.Explain Kubernetes Secrets
Secrets let you store and manage sensitive information (passwords, ssh keys, etc.)
804.How to create a Secret from a key and value?
kubectl create secret generic some-secret --from-literal=password='donttellmypassword'
805.How to create a Secret from a file?
kubectl create secret generic some-secret --from-file=/some/file.txt
806.What type: Opaque in a secret file means? What other types are there?
Opaque is the default type used for key-value pairs.
807.True or False? storing data in a Secret component makes it automatically secured
False. Some known security mechanisms like "encryption" aren't enabled by default
808.What is the problem with the following Secret file:
apiVersion: v1
kind: Secret
metadata:
name: some-secret
type: Opaque
data:
password: mySecretPassword
Password isn't encrypted. You should run something like this: `echo -n 'mySecretPassword' | base64` and paste the result to the file instead of using plain-text.
809.How to create a Secret from a configuration file?
kubectl apply -f some-secret.yaml
KUBERNETES STORAGE
810.True or False? Kubernetes provides data persistence out of the box, so when you restart a pod, data is saved
False
811.Explain "Persistent Volumes". Why do we need it?
Persistent Volumes allow us to save data so basically they provide storage that doesn't depend on the pod lifecycle.
812.True or False? Persistent Volume must be available to all nodes because the pod can restart on any of them
True
813.What types of persistent volumes are there?
NFS
iSCSI
CephFS
...
814.What is PersistentVolumeClaim?
815.True or False? Kubernetes manages data persistence
False
816.Explain Storage Classes
817.Explain "Dynamic Provisioning" and "Static Provisioning"
818.Explain Access Modes
819.What is Reclaim Policy?
821.What reclaim policies are there?
Retain
Recycle
Delete
KUBERNETES ACCESS CONTROL
822.What is RBAC?
823.Explain the Role and RoleBinding objects
824.What is the difference between Role and ClusterRole objects?
KUBERNETES MISC
825.Explain what Kubernetes Service Discovery means
826.You have one Kubernetes cluster and multiple teams that would like to use it. You would like to limit the resources each team consumes in the cluster. Which Kubernetes concept would you use for that?
Namespaces will allow you to limit resources and also make sure there are no collisions between teams when working in the cluster (like creating an app with the same name).
827.What does Kube Proxy do?
828.What "Resources Quotas" are used for and how?
829.Explain ConfigMap
Separates configuration from pods. It's good for cases where you might need to change configuration at some point but you don't want to restart the application or rebuild the image, so you create a ConfigMap and connect it to a pod while keeping the configuration external to the pod.
Overall it's good for:
Sharing the same configuration between different pods
Storing external to the pod configuration
830.How do you use a ConfigMap?
Create it (from key&value, a file or an env file)
Attach it, e.g. mount the ConfigMap as a volume
831.True or False? Sensitive data, like credentials, should be stored in a ConfigMap
False. Use a Secret.
832.Explain "Horizontal Pod Autoscaler"
Scales the number of pods automatically based on observed CPU utilization.
833.When you delete a pod, is it deleted instantly? (a moment after running the command)
834.How to delete a pod instantly?
Use "--grace-period=0 --force"
835.Explain Liveness probe
836.Explain Readiness probe
837.What does being cloud-native mean?
838.Explain the pet and cattle approach of infrastructure with respect to kubernetes
839.Describe how one proceeds to run a containerised web app in K8s, which should be reachable from a public URL.
840.How would you troubleshoot your cluster if some applications are not reachable any more?
841.Describe what CustomResourceDefinitions there are in the Kubernetes world. What can they be used for?
842.The control plane component kube-scheduler asks the following questions:
What to schedule? It tries to understand the pod-definition specifications
Which node to schedule on? It tries to determine the best node with available resources to spin up the pod
It then binds the Pod to the chosen node
843.How are labels and selectors used?
844.Explain what is CronJob and what is it used for
845.What QoS classes are there?
Guaranteed
Burstable
BestEffort
846.Explain Labels. What are they and why would one use them?
847.Explain Selectors
848.What is Kubeconfig?
HELM
849.What is Helm?
Package manager for Kubernetes. Basically the ability to package YAML files and distribute them to other users.
850.Why do we need Helm? What would be the use case for using it?
Sometimes when you would like to deploy a certain application to your cluster, you need to create multiple YAML files / components like: Secret, Service, ConfigMap, etc. This can be a tedious task. So it would make sense to ease the process by introducing something that allows us to share these bundles of YAMLs every time we would like to add an application to our cluster. This something is called Helm.
851.Explain "Helm Charts"
Helm Charts is a bundle of YAML files. A bundle that you can consume from repositories or create your own and publish it to the repositories.
852.It is said that Helm is also Templating Engine. What does it mean?
It is useful for scenarios where you have multiple applications and all are similar, so there are only minor differences in their configuration files and most values are the same. With Helm you can define a common blueprint for all of them, and the values that are not fixed can be placeholders. This is called a template file and it looks similar to the following
apiVersion: v1
kind: Pod
metadata:
name: {{ .Values.name }}
spec:
containers:
- name: {{ .Values.container.name }}
image: {{ .Values.container.image }}
port: {{ .Values.container.port }}
The values themselves will be in a separate file:
name: some-app
container:
name: some-app-container
image: some-app-image
port: 1991
853.What are some use cases for using Helm template file?
Deploy the same application across multiple different environments
CI/CD
854.Explain the Helm Chart Directory Structure
someChart/ -> the name of the chart
  Chart.yaml -> meta information on the chart
  values.yaml -> values for the template files
  charts/ -> chart dependencies
  templates/ -> template files :)
855.How do you search for charts?
helm search hub [some_keyword]
856.Is it possible to override values in values.yaml file when installing a chart?
Yes. You can pass another values file: `helm install --values=override-values.yaml [CHART_NAME]`
Or directly on the command line: helm install --set some_key=some_value
857.How does Helm support release management?
Helm allows you to upgrade, remove and roll back to previous versions of charts. In Helm 2 this was done with a component known as "Tiller". In Helm 3, Tiller was removed due to security concerns.
ISTIO
858.What is Istio? What is it used for?
PROGRAMMING
859.What programming language do you prefer to use for DevOps related tasks? Why specifically this one?
860.What are static typed (or simply typed) languages?
In static typed languages the variable type is known at compile-time instead of at run-time. Such languages are: C, C++ and Java
861.Explain expressions and statements
An expression is anything that results in a value (even if the value is None). Basically, any literal, or combination of literals and operators, is an expression, so a string, integer, list, ... are all expressions.
Statements are instructions executed by the interpreter like variable assignments, for loops and conditionals (if-else).
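A few quick Python examples to make the distinction concrete:
# Expressions - each of these evaluates to a value
1 + 2                      # 3
len("devops")              # 6
[x * 2 for x in (1, 2)]    # [2, 4]
# Statements - instructions that do something, but are not themselves values
name = "devops"            # assignment statement
if name:                   # conditional statement
    print(name)
for ch in name:            # loop statement
    pass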
862.What is Object Oriented Programming? Why is it important?
863.Explain Composition
864.What is a compiler?
865.What is an interpreter?
866.SOLID design principles are about:
Make it easier to extend the functionality of the system
Make the code more readable and easier to maintain
SOLID is:
Single Responsibility - A class should only have a single responsibility
Open-Closed - An entity should be open for extension, but closed for modification. What this practically means is that you should extend functionality by adding a new code and not by modifying it. Your system should be separated into components so it can be easily extended without breaking everything.
Liskov Substitution - Any derived class should be able to substitute its parent without altering the correctness of the program. Practically, every part of the code will get the expected result no matter which part is using it
Interface segregation - A client should never depend on anything it doesn't use
Dependency Inversion - High level modules should depend on abstractions, not low level modules
867.What is DRY? What is your opinion on it?
868.What are the four pillars of object oriented programming?
869.Explain recursion
870.Explain Inversion of Control
871.Explain Dependency Injection
872.True or False? In Dynamically typed languages the variable type is known at run-time instead of at compile-time
True
873.Explain what are design patterns and describe three of them in detail
874.Explain big O notation
875.What is "Duck Typing"?
876.Binary search:
How does it work?
Can you implement it? (in any language you prefer)
What is the average performance of the algorithm you wrote?
It's a search algorithm used with sorted arrays/lists to find a target value by repeatedly dividing the array and comparing the middle value to the target value. If the middle value is smaller than the target value, then the target value is searched for in the right part of the divided array, else in the left side. This continues until the value is found (or the array can't be divided further).
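A minimal iterative implementation in Python (names are just for illustration); the average and worst-case time complexity is O(log n):
def binary_search(sorted_list, target):
    # Return the index of target in sorted_list, or -1 if not found
    low, high = 0, len(sorted_list) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            low = mid + 1    # keep searching the right half
        else:
            high = mid - 1   # keep searching the left half
    return -1
print(binary_search([1, 3, 5, 7, 9], 7))   # 3
print(binary_search([1, 3, 5, 7, 9], 4))   # -1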
877.What are your code-review best practices?
878.Do you agree/disagree with each of the following statements and why?:
The commit message is not important. When reviewing a change/patch one should focus on the actual change
You shouldn't test your code before submitting it. This is what CI/CD exists for.
879.In any language you want, write a function to determine if a given string is a palindrome
880.In any language you want, write a function to determine if two strings are Anagrams
881.In any language you would like, print the numbers from 1 to a given integer. For example for input: 5, the output is: 12345
882.Describe what would be the time complexity of the operations access, search insert and remove for the following data structures:
Stack
Queue
Linked List
Binary Search Tree
883.What is the complexity for the best, worst and average cases of each of the following algorithms?:
Quick sort
Merge sort
Bucket Sort
Radix Sort
884.Implement Stack in any language you would like
885.Tell me everything you know about Linked Lists
A linked list is a data structure
It consists of a collection of nodes. Together these nodes represent a sequence
Useful for use cases where you need to insert or remove an element from any position of the linked list
Some programming languages don't have linked lists as a built-in data type (like Python for example) but it can be easily implemented
886.Describe (no need to implement) how to detect a loop in a Linked List
There are multiple ways to detect a loop in a linked list. I'll mention three here:
Worst solution:
Two pointers where one points to the head and one points to the last node. Each time you advance the last pointer by one and check whether the distance between the head pointer and the moved pointer is bigger than the last time you measured the same distance (if not, you have a loop).
The reason it's probably the worst solution, is because time complexity here is O(n^2)
Decent solution:
Create a hash table and start traversing the linked list. Every time you move, check whether the node you moved to is in the hash table. If it isn't, insert it into the hash table. If at any point you do find the node in the hash table, it means you have a loop. When you reach None/Null, it's the end and you can return a "no loop" value. This one is very easy to implement (just create a hash table, update it and check whether the node is in the hash table every time you move to the next node), but since the auxiliary space is O(n) because you create a hash table, it's not the best solution
Good solution:
Instead of creating a hash table to document which nodes in the linked list you have visited, as in the previous solution, you can modify the Linked List (or the Node to be precise) to have a "visited" attribute. Every time you visit a node, you set "visited" to True.
Time complexity is O(n) and auxiliary space is O(1), so it's a good solution, but the only problem is that you have to modify the Linked List.
Best solution:
You set two pointers to traverse the linked list from the beginning. You move one pointer by one each time and the other pointer by two. If at any point they meet, you have a loop. This solution is also called "Floyd's Cycle-Finding"
Time complexity is O(n) and auxiliary space is O(1). Perfect :)
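A short Python sketch of the two-pointer approach, assuming a simple Node class with a next attribute (the class and function names here are just examples):
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None
def has_loop(head):
    # Return True if the linked list starting at head contains a cycle
    slow = fast = head
    while fast and fast.next:
        slow = slow.next           # moves one step
        fast = fast.next.next      # moves two steps
        if slow is fast:           # the pointers can only meet inside a cycle
            return True
    return False                   # fast reached the end, so there is no cycle
# quick check: a -> b -> c -> b (loop)
a, b, c = Node('a'), Node('b'), Node('c')
a.next, b.next, c.next = b, c, b
print(has_loop(a))   # True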
887.Implement Hash table in any language you would like
888.What is Integer Overflow? How is it handled?
889.Name 3 design patterns. Do you know how to implement (= provide an example) these design pattern in any language you'll choose?
890.Given an array/list of integers, find 3 integers which are adding up to 0 (in any language you would like)
def find_triplets_sum_to_zero(li):
    li = sorted(li)  # sort first so the two-pointer scan below is valid
    for i, val in enumerate(li):
        low, up = 0, len(li) - 1
        while low < i < up:
            tmp = val + li[low] + li[up]  # 'val' is the middle element of the candidate triplet
            if tmp > 0:
                up -= 1
            elif tmp < 0:
                low += 1
            else:
                yield li[low], val, li[up]
                low += 1
                up -= 1
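Since the function is a generator, it can be consumed like this:
print(list(find_triplets_sum_to_zero([-5, 1, 4, 2, 3, -2, 10])))
# [(-5, 1, 4), (-5, 2, 3)]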
PYTHON
891.What are some characteristics of the Python programming language?
1. It is a high level general purpose programming language created in 1991 by Guido van Rossum.
2. The language is interpreted, with CPython (written in C) being the most used and maintained implementation.
3. It is strongly typed. The typing discipline is duck typing and gradual.
4. Python focuses on readability and makes use of whitespace/indentation instead of brackets { }
5. The Python package manager is called pip ("pip installs packages"), having more than 200,000 available packages.
6. Python comes with pip installed and a big standard library that offers the programmer many precooked solutions.
7. In Python everything is an object.
892.What built-in types Python has?
List
Dictionary
Set
Numbers (int, float, ...)
String
Bool
Tuple
Frozenset
893.What is mutability? Which of the built-in types in Python are mutable?
Mutability determines whether you can modify an object of a specific type.
The mutable data types are:
List
Dictionary
Set
The immutable data types are:
Numbers (int, float, ...)
String
Bool
Tuple
Frozenset
894.What is a tuple in Python? What is it used for?
A tuple is a built-in data type in Python. It's used for storing multiple items in a single variable.
895.List, like a tuple, is also used for storing multiple items. What is then, the difference between a tuple and a list?
A list, as opposed to a tuple, is a mutable data type. It means we can modify it and add items to it.
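A quick demonstration:
my_list = [1, 2, 3]
my_list[0] = 10        # fine - lists are mutable
my_list.append(4)      # [10, 2, 3, 4]
my_tuple = (1, 2, 3)
my_tuple[0] = 10       # raises TypeError - tuples are immutable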
896.What is the result of each of the following?
1 > 2
'b' > 'a'
1 == 'one'
2 > 'one'
False
True
False
TypeError
896.What is the result of each of the following?
"abc"*3
"abc"*2.5
"abc"*2.0
"abc"*True
"abc"*False
abcabcabc
TypeError
TypeError
"abc"
""
897.What is the result of `bool("")`? What about `bool(" ")`? Explain
bool("") -> evaluates to False
bool(" ") -> evaluates to True
898.What is the result of running [] is not []? Explain the result
It evaluates to True.
The reason is that the two created empty lists are different objects. x is y only evaluates to True when x and y are the same object.
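You can see this with id(), which returns an object's identity:
a = []
b = []
print(a == b)      # True  - equal values (both are empty lists)
print(a is b)      # False - two different objects
print(a is not b)  # True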
899.Improve the following code:
char = input("Insert a character: ")
if char == "a" or char == "o" or char == "e" or char =="u" or char == "i":
print("It's a vowel!")
char = input("Insert a character: ")  # For readability
if char[0].lower() in "aeiou":  # Takes care of multiple characters and small/capital cases
    print("It's a vowel!")
OR
if input("Insert a character: ")[0].lower() in "aeiou":  # Takes care of multiple characters and small/capital cases
    print("It's a vowel!")
900.How to define a function with Python?
Using the `def` keyword. For example:
def sum(a, b):
return (a + b)
901.In Python, functions are first-class objects. What does it mean?
In general, first-class objects in programming languages are objects which can be assigned to a variable, used as a return value, and used as arguments or parameters.
In Python you can treat functions this way. Let's say we have the following function
def my_function():
return 5
You can then assign the function to a variable like this: x = my_function, or you can return the function as a return value like this: return my_function
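Putting the pieces together (the helper names are just for illustration):
def my_function():
    return 5
x = my_function              # assign the function object to a variable
print(x())                   # 5
def call_twice(func):        # functions can be passed as arguments...
    return func() + func()
def get_function():          # ...and returned as values
    return my_function
print(call_twice(get_function()))   # 10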
902.Explain inheritance and how TO USE it in Python
By definition inheritance is the mechanism where an object acts as a base of another object, retaining all its
properties.
So if Class B inherits from Class A, every characteristics from class A will be also available in class B.
Class A would be the 'base class' and class B would be the 'derived class'.
This comes handy when you have several classes that share the same functionalities.
The basic syntax is:
class Base: pass
class Derived(Base): pass
A more elaborate example:
class Animal:
def __init__(self):
print("and I'm alive!")
def eat(self, food):
print("ñom ñom ñom", food)
class Human(Animal):
def __init__(self, name):
print('My name is ', name)
super().__init__()
def write_poem(self):
print('Foo bar bar foo foo bar!')
class Dog(Animal):
def __init__(self, name):
print('My name is', name)
super().__init__()
def bark(self):
print('woof woof')
michael = Human('Michael')
michael.eat('Spam')
michael.write_poem()
bruno = Dog('Bruno')
bruno.eat('bone')
bruno.bark()
>>> My name is Michael
>>> and I'm alive!
>>> ñom ñom ñom Spam
>>> Foo bar bar foo foo bar!
>>> My name is Bruno
>>> and I'm alive!
>>> ñom ñom ñom bone
>>> woof woof
Calling super() calls the Base method, thus, calling super().__init__() we called the Animal __init__.
There is a more advanced Python feature called metaclasses that aids the programmer in directly controlling class creation.
903.Explain and demonstrate class attributes & instance attributes
In the following block of code x is a class attribute while self.y is an instance attribute
class MyClass(object):
x = 1
def __init__(self, y):
self.y = y
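And how they behave:
a = MyClass(y=2)
b = MyClass(y=3)
print(MyClass.x, a.x, b.x)   # 1 1 1 - x is shared via the class
print(a.y, b.y)              # 2 3   - y belongs to each instance
MyClass.x = 7                # changing the class attribute is visible to all instances
print(a.x, b.x)              # 7 7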
904.What is an error? What is an exception? What types of exceptions are you familiar with?
# Note that you generally don't need to know the compiling process but knowing where everything comes from
# and giving complete answers shows that you truly know what you are talking about.
Generally, every compiling process has two steps.
- Analysis
- Code Generation.
Analysis can be broken into:
1. Lexical analysis (Tokenizes source code)
2. Syntactic analysis (Check whether the tokens are legal or not, tldr, if syntax is correct)
for i in 'foo'
^
SyntaxError: invalid syntax
We missed ':'
3. Semantic analysis (Contextual analysis, legal syntax can still trigger errors, did you try to divide by 0,
hash a mutable object or use an undeclared function?)
1/0
ZeroDivisionError: division by zero
These three analysis steps are responsible for error handling.
The second step would be responsible for errors, mostly syntax errors, which are the most common errors.
The third step would be responsible for Exceptions.
As we have seen, Exceptions are semantic errors, there are many builtin Exceptions:
ImportError
ValueError
KeyError
FileNotFoundError
IndentationError
IndexError
...
You can also have user defined Exceptions that have to inherit from the `Exception` class, directly or indirectly.
Basic example:
class DividedBy2Error(Exception):
def __init__(self, message):
self.message = message
def division(dividend,divisor):
if divisor == 2:
raise DividedBy2Error('I dont want you to divide by 2!')
return dividend / divisor
division(100, 2)
>>> __main__.DividedBy2Error: I dont want you to divide by 2!
905.Explain Exception Handling and how to use it in Python
Exceptions: Errors detected during execution are called Exceptions.
Handling Exception: When an error occurs, or exception as we call it, Python will normally stop and generate an error message.
Exceptions can be handled using try and except statement in python.
Example: Following example asks the user for input until a valid integer has been entered.
If the user enters a non-integer value it will raise an exception, and using except it will catch that exception and ask the user to enter a valid integer again.
while True:
try:
a = int(input("please enter an integer value: "))
break
except ValueError:
print("Ops! Please enter a valid integer value.")
906.Explain the following built-in functions (their purpose + use case example):
repr
any
all
907.What is the difference between repr function and str?
908.What is the __call__ method?
909.Do classes has the __call__ method as well? What for?
910.What _ is used for in Python?
Translation lookup in i18n
Holds the result of the last executed expression in the interactive interpreter.
As a general purpose "throwaway" variable name. For example: x, y, _ = get_data() (x and y are used, but since we don't care about the third variable, we "threw it away").
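A short sketch of the throwaway-variable use; get_point() is just a made-up helper for illustration:
def get_point():
    return 3, 7, 9

x, y, _ = get_point()   # we only care about the first two values
print(x + y)

for _ in range(3):      # the loop variable itself is never used
    print("ping")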
911.Explain what is GIL
912.What is Lambda? How is it used?
A lambda expression is an 'anonymous' function; the difference from a normal function defined with the keyword `def` is the syntax and usage.
The syntax is:
lambda [parameters]: [expression]
Examples:
A lambda function that adds 10 to any argument passed:
x = lambda a: a + 10
print(x(10))
An addition function:
addition = lambda x, y: x + y
print(addition(10, 20))
A squaring function:
square = lambda x: x ** 2
print(square(5))
Generally it is considered bad practice under PEP 8 to assign a lambda expression to a name; lambdas are meant to be passed as arguments to other functions (see the example below).
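A minimal sketch of that intended usage, passing a lambda as the key argument of sorted():
words = ["kubernetes", "git", "ansible"]
print(sorted(words, key=lambda w: len(w)))   # sort by word length
>>> ['git', 'ansible', 'kubernetes']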
913.Are there private variables in Python? How would you make an attribute of a class, private?
914.Explain the following:
getter
setter
deleter
915.Explain what is @property
916.How do you swap values between two variables?
x, y = y, x
917.Explain the following object's magic variables:
__dict__
918.Write a function to return the sum of one or more numbers. The user will decide how many numbers to use
First, ask the user how many numbers will be used. Then use a while loop that runs until amount_of_numbers reaches 0, subtracting one from it on each iteration. Inside the loop, ask the user for a number and add it to a running total.
def return_sum():
    amount_of_numbers = int(input("How many numbers? "))
    total_sum = 0
    while amount_of_numbers != 0:
        num = int(input("Input a number. "))
        total_sum += num
        amount_of_numbers -= 1
    return total_sum
919.Print the average of [2, 5, 6]. It should be rounded to 3 decimal places
li = [2, 5, 6]
print("{0:.3f}".format(sum(li)/len(li)))
920.How to add the number 2 to the list x = [1, 2, 3]
x.append(2)
921.How to check how many items a list contains?
len(some_list)
922.How to get the last element of a list?
some_list[-1]
923.How to add the items of [1, 2, 3] to the list [4, 5, 6]?
x = [4, 5, 6]
x.extend([1, 2, 3])
Don't use append here, or the whole list would be added as a single (nested) item. See the example below.
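A quick sketch of the difference between extend and append:
x = [4, 5, 6]
x.extend([1, 2, 3])
print(x)
>>> [4, 5, 6, 1, 2, 3]

y = [4, 5, 6]
y.append([1, 2, 3])
print(y)
>>> [4, 5, 6, [1, 2, 3]]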
924.How to remove the first 3 items from a list?
my_list[0:3] = []
925.How do you get the maximum and minimum values from a list?
Maximum: max(some_list)
Minimum: min(some_list)
926.How to get the top/biggest 3 items from a list?
sorted(some_list, reverse=True)[:3]
Or
some_list.sort(reverse=True)
some_list[:3]
927.How to insert an item to the beginning of a list? What about two items?
928.How to sort list by the length of items?
sorted_li = sorted(li, key=len)
Or without creating a new list:
li.sort(key=len)
929.Do you know what is the difference between list.sort() and sorted(list)?
sorted(list) will return a new list (the original list doesn't change)
list.sort() will return None, but the list is changed in-place
sorted() works on any iterable (dictionaries, strings, ...)
list.sort() is faster than sorted(list) for lists
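A short sketch demonstrating the in-place vs. new-list behaviour:
li = [3, 1, 2]
print(sorted(li))   # [1, 2, 3] - a new list is returned
print(li)           # [3, 1, 2] - the original is untouched
print(li.sort())    # None - the list is sorted in place
print(li)           # [1, 2, 3]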
930.Convert every string to an integer: [['1', '2', '3'], ['4', '5', '6']]
nested_li = [['1', '2', '3'], ['4', '5', '6']]
[[int(x) for x in li] for li in nested_li]
931.How to merge two sorted lists into one sorted list?
sorted(li1 + li2)
Another way:
i, j = 0, 0
merged_li = []
while i < len(li1) and j < len(li2):
    if li1[i] < li2[j]:
        merged_li.append(li1[i])
        i += 1
    else:
        merged_li.append(li2[j])
        j += 1
# Append whatever is left in either list
merged_li = merged_li + li1[i:] + li2[j:]
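If you prefer the standard library, heapq.merge lazily merges already-sorted iterables; a minimal sketch:
import heapq

li1 = [1, 4, 7]
li2 = [2, 3, 9]
print(list(heapq.merge(li1, li2)))
>>> [1, 2, 3, 4, 7, 9]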
932.How to check if all the elements in a given lists are unique? so [1, 2, 3] is unique but [1, 1, 2, 3] is not unique because 1 exists twice
There are many ways of solving this problem:
# Note: ":list" and "-> bool" are just Python type hints; they are not needed for the correct execution of the algorithm.
Taking advantage of sets and len:
def is_unique(l: list) -> bool:
    return len(set(l)) == len(l)
This approach can be seen in other programming languages as well:
def is_unique2(l: list) -> bool:
    seen = []
    for i in l:
        if i in seen:
            return False
        seen.append(i)
    return True
Here we just count and make sure every element appears only once:
def is_unique3(l: list) -> bool:
    for i in l:
        if l.count(i) > 1:
            return False
    return True
This one might look more convoluted, but hey, one-liners:
def is_unique4(l: list) -> bool:
    return all(map(lambda x: l.count(x) < 2, l))
933.you have the following function
def my_func(li=[]):
    li.append("hmm")
    print(li)
If we call it 3 times, what would be the result each call?
['hmm']
['hmm', 'hmm']
['hmm', 'hmm', 'hmm']
This happens because the default list is created once, when the function is defined, and the same list object is reused on every call.
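A minimal sketch of the usual way to avoid this pitfall, using None as the default value:
def my_func(li=None):
    if li is None:
        li = []          # a fresh list is created on every call
    li.append("hmm")
    print(li)

my_func()   # ['hmm']
my_func()   # ['hmm'] - no state is shared between calls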
934.How to iterate over a list?
for item in some_list:
print(item)
935.How to iterate over a list with indexes?
for i, item in enumerate(some_list):
    print(i, item)
936.How to start list iteration from 2nd index?
Using range:
for i in range(1, len(some_list)):
    print(some_list[i])
Another way is using slicing:
for item in some_list[1:]:
    print(item)
937.How to iterate over a list in reverse order?
Method 1
for i in reversed(li):
    print(i)
Method 2
n = len(li) - 1
while n >= 0:
    print(li[n])
    n -= 1
938.Sort a list of lists by the second item of each nested list
li = [[1, 4], [2, 1], [3, 9], [4, 2], [4, 5]]
sorted(li, key=lambda l: l[1])
or
li.sort(key=lambda l: l[1])
939.Combine [1, 2, 3] and ['x', 'y', 'z'] so the result is [(1, 'x'), (2, 'y'), (3, 'z')]
nums = [1, 2, 3]
letters = ['x', 'y', 'z']
list(zip(nums, letters))
940.What is List Comprehension? Is it better than a typical loop? Why? Can you demonstrate how to use it?
941.You have the following list: [{'name': 'Mario', 'food': ['mushrooms', 'goombas']}, {'name': 'Luigi', 'food': ['mushrooms', 'turtles']}] Extract all type of foods. Final output should be: {'mushrooms', 'goombas', 'turtles'}
brothers_menu = \
[{'name': 'Mario', 'food': ['mushrooms', 'goombas']}, {'name': 'Luigi', 'food': ['mushrooms', 'turtles']}]
# "Classic" Way
def get_food(brothers_menu) -> set:
temp = []
for brother in brothers_menu:
for food in brother['food']:
temp.append(food)
return set(temp)
# One liner way (Using list comprehension)
set([food for bro in x for food in bro['food']])
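The same one-liner can be written directly as a set comprehension, skipping the intermediate list:
{food for bro in brothers_menu for food in bro['food']}
>>> {'mushrooms', 'goombas', 'turtles'}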
942.How to create a dictionary?
my_dict = dict(x=1, y=2)
Or
my_dict = {'x': 1, 'y': 2}
Or
my_dict = dict([('x', 1), ('y', 2)])
943.How to remove a key from a dictionary?
del my_dict['some_key']
You can also use my_dict.pop('some_key'), which returns the value of the removed key.
944.How to sort a dictionary by values?
{k: v for k, v in sorted(some_dictionary.items(), key=lambda item: item[1])}
945.How to sort a dictionary by keys?
dict(sorted(some_dictionary.items()))
946.How to merge two dictionaries?
some_dict1.update(some_dict2)
947.Convert the string "a.b.c" to the dictionary {'a': {'b': {'c': 1}}}
from functools import reduce

output = {}
string = "a.b.c"
path = string.split('.')
# Walk/create the nested dictionaries for all keys except the last one
target = reduce(lambda d, k: d.setdefault(k, {}), path[:-1], output)
target[path[-1]] = 1
print(output)
948.Can you implement "binary search" in Python?
#!/usr/bin/env python
import random

def binary_search(arr, lb, ub, target):
    """
    A binary search example with O(log n) time complexity.
    """
    if lb <= ub:
        mid = (lb + ub) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            return binary_search(arr, mid + 1, ub, target)
        else:
            return binary_search(arr, lb, mid - 1, target)
    else:
        return -1

if __name__ == '__main__':
    rand_num_li = sorted([random.randint(1, 50) for _ in range(10)])
    target = random.randint(1, 50)
    print("List: {}\nTarget: {}\nIndex: {}".format(
        rand_num_li, target,
        binary_search(rand_num_li, 0, len(rand_num_li) - 1, target)))
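For comparison, the standard library's bisect module can do the lookup as well; a minimal sketch (bisect_left returns an insertion point, so we still check that the element is really there):
import bisect

def binary_search_bisect(arr, target):
    i = bisect.bisect_left(arr, target)
    if i < len(arr) and arr[i] == target:
        return i
    return -1

print(binary_search_bisect([1, 3, 5, 7, 9], 7))
>>> 3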
949.How to write to a file?
with open('file.txt', 'w') as file:
    file.write("My insightful comment")
950.How to print the 12th line of a file?
951.How to reverse a file?
952.Sum all the integers in a given file
953.Print a random line of a given file
954.Print every 3rd line of a given file
955.Print the number of lines in a given file
956.Print the number of words in a given file
957.Can you write a function which will print all the files in a given directory, including sub-directories?
958.Write a dictionary (variable) to a file
import json

with open('file.json', 'w') as f:
    f.write(json.dumps(dict_var))
959.Find the first repeated character in a string
While you iterate through the characters, store them in a dictionary and check for every character whether it's already in the dictionary.
def firstRepeatedCharacter(s):
    chars = {}
    for ch in s:
        if ch in chars:
            return ch
        else:
            chars[ch] = 0
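A quick usage sketch; note that the function returns None when no character repeats:
print(firstRepeatedCharacter("jenkins"))   # n
print(firstRepeatedCharacter("devops"))    # None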
960.How to extract the unique characters from a string? for example given the input "itssssssameeeemarioooooo" the output will be "mrtisaoe"
x = "itssssssameeeemarioooooo"
y = ''.join(set(x))
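Note that sets are unordered, so the exact output order may differ from "mrtisaoe". If you need the characters in order of first appearance, dict.fromkeys preserves insertion order (Python 3.7+); a minimal sketch:
x = "itssssssameeeemarioooooo"
print(''.join(dict.fromkeys(x)))
>>> itsamero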
961.Find all the permutations of a given string
def permute_string(string):
    if len(string) == 1:
        return [string]

    permutations = []
    for i in range(len(string)):
        swaps = permute_string(string[:i] + string[(i+1):])
        for swap in swaps:
            permutations.append(string[i] + swap)

    return permutations

print(permute_string("abc"))
962.How to check if a string contains a sub string?
963.Find the frequency of each character in string
964.Count the number of spaces in a string
965.Given a string, find the N most repeated words
966.Given the string (which represents a matrix) "1 2 3\n4 5 6\n7 8 9" create rows and columns variables (should contain integers, not strings)
967.Explain the following types of methods and how to use them:
Static method
Class method
instance method
968.How to reverse a list?
969.How to combine list of strings into one string with spaces between the strings
970.You have the following list of nested lists: [['Mario', 90], ['Geralt', 82], ['Gordon', 88]] How to sort the list by the numbers in the nested lists?
the_list.sort(key=lambda x: x[1])
971.Explain the following:
zip()
map()
filter()
972.Can you implement a linked list in Python?
The reason we need to implement it in the first place is that a linked list isn't part of the Python standard library.
To implement a linked list, we have to implement two structures: the linked list itself and a node which is used by the linked list.
Let's start with a node. A node has some value (the data it holds) and a pointer to the next node.
class Node(object):
    def __init__(self, data):
        self.data = data
        self.next = None
Now the linked list. An empty linked list has nothing but an empty head.
class LinkedList(object):
    def __init__(self):
        self.head = None
Now we can start using the linked list:
ll = LinkedList()
ll.head = Node(1)
ll.head.next = Node(2)
ll.head.next.next = Node(3)
What we have is:
| 1 | -> | 2 | -> | 3 |
973.Add a method to the Linked List class to traverse (print every node's data) the linked list
def print_list(self):
    node = self.head
    while node:
        print(node.data)
        node = node.next
974.Write a method to that will return a boolean based on whether there is a loop in a linked list or not
Let's use Floyd's cycle-finding algorithm:
def loop_exists(self):
    one_step_p = self.head
    two_steps_p = self.head
    while one_step_p and two_steps_p and two_steps_p.next:
        one_step_p = one_step_p.next
        two_steps_p = two_steps_p.next.next
        if one_step_p == two_steps_p:
            return True
    return False
975.Implement simple calculator for two numbers
def add(num1, num2):
    return num1 + num2

def sub(num1, num2):
    return num1 - num2

def mul(num1, num2):
    return num1 * num2

def div(num1, num2):
    return num1 / num2

operators = {
    '+': add,
    '-': sub,
    '*': mul,
    '/': div
}

if __name__ == '__main__':
    operator = str(input("Operator: "))
    num1 = int(input("1st number: "))
    num2 = int(input("2nd number: "))
    print(operators[operator](num1, num2))
976.What data types are you familiar with that are not Python built-in types but still provided by modules which are part of the standard library?
This is a good reference https://docs.python.org/3/library/datatypes.html
977.Explain what is a decorator
In Python everything is an object, even functions themselves. Therefore you can pass functions as arguments to another function, e.g.:
def wee(word):
    return word

def oh(f):
    return f + " Ohh"

>>> oh(wee("Wee"))
<<< Wee Ohh
This allows us to control what happens before the execution of any given function, and if we add another function as a wrapper (a function that receives a function and returns a function), we can also control what happens after the execution.
Sometimes we want to control the before/after execution of many functions, and it would get tedious to write
f = function(function_1())
f = function(function_1(function_2(*args)))
everywhere; decorators let us apply that wrapping with the @ syntax instead.
978.Can you show how to write and use decorators?
Decorators like ntimes and timer are often used to demonstrate decorator functionality; you can find them in lots of tutorials and talks. I first saw these examples at PyData 2017: https://www.youtube.com/watch?v=7lmCu8wz8ro&t=3731s
Simple decorator:
def deco(f):
    print(f"Hi I am the {f.__name__}() function!")
    return f

@deco
def hello_world():
    return "Hi, I'm in!"

a = hello_world()
print(a)
>>> Hi I am the hello_world() function!
Hi, I'm in!
This is the simplest decorator version; it basically saves us from writing hello_world = deco(hello_world) ourselves. But at this point we can only control the "before" execution, so let's take on the "after":
def deco(f):
    def wrapper(*args, **kwargs):
        print("Rick Sanchez!")
        func = f(*args, **kwargs)
        print("I'm in!")
        return func
    return wrapper

@deco
def f(word):
    print(word)

a = f("*********")
>>> Rick Sanchez!
*********
I'm in!
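One common refinement, if you want the decorated function to keep its original name and docstring, is functools.wraps; a minimal sketch:
import functools

def deco(f):
    @functools.wraps(f)            # preserves f.__name__, f.__doc__, etc.
    def wrapper(*args, **kwargs):
        print("before")
        result = f(*args, **kwargs)
        print("after")
        return result
    return wrapper

@deco
def greet(name):
    """Say hello."""
    print("hello", name)

greet("world")
print(greet.__name__)   # greet, not wrapper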
Prometheus
979.What is Prometheus? What are some of Prometheus's main features?
980.Describe Prometheus architecture and components
981.Can you compare Prometheus to other solutions like InfluxDB for example?
982.What is an Alert?
983.Describe the following Prometheus components:
Prometheus server
Push Gateway
Alert Manager
984.What is an Instance? What is a Job?
985.What core metrics types Prometheus supports?
986.What is an exporter? What is it used for?
987.Which Prometheus best practices are you familiar with? Name at least three.
988.How to get total requests in a given period of time?
989.What does HA mean in Prometheus?
990.How do you join two metrics?
991.How to write a query that returns the value of a label?
992.How do you convert cpu_user_seconds to cpu usage in percentage?
GIT
993.How do you know if a certain directory is a git repository?
You can check if there is a ".git" directory inside it.
994.How to check if a file is tracked and if not, then track it?
995.What is the difference between git pull and git fetch?
In short, git pull = git fetch + git merge.
When you run git pull, it gets all the changes from the remote (central) repository and merges them into your corresponding local branch.
git fetch gets all the changes from the remote repository and stores them in the remote-tracking branches of your local repository, without touching your local branches.
996.Explain the following: git directory, working directory and staging area
The Git directory is where Git stores the meta data and object database for your project. This is the most important part of Git, and it is what is copied when you clone a repository from another computer.
The working directory is a single checkout of one version of the project. These files are pulled out of the compressed database in the Git directory and placed on disk for you to use or modify.
The staging area is a simple file, generally contained in your Git directory, that stores information about what will go into your next commit. It’s sometimes referred to as the index, but it’s becoming standard to refer to it as the staging area.
This answer taken from git-scm.com
997.How to resolve git merge conflicts?
First, open the files which are in conflict and identify what the conflicts are. Next, based on what is accepted in your company or team, either discuss the conflicts with your colleagues or resolve them yourself. After resolving the conflicts, add the files with `git add <file>` and complete the merge with `git commit` (or, if the conflicts happened during a rebase, continue it with `git rebase --continue`).
998.What is the difference between git reset and git revert?
git revert creates a new commit which undoes the changes introduced by a given commit.
git reset, depending on usage, can modify the index or move the commit that the branch head is currently pointing at.
999.You would like to move the fourth commit to the top. How would you achieve that?
Using interactive rebase (git rebase -i), which lets you reorder commits.
1000.In what situations are you using git rebase?
1001.What merge strategies are you familiar with?
Mentioning two or three should be enough, and it's probably good to mention that 'recursive' is the default one.
recursive, resolve, ours, theirs
This page explains it the best: https://git-scm.com/docs/merge-strategies
MERGE STRATEGIES
The merge mechanism (git merge and git pull commands) allows the backend merge strategies to be chosen with -s option. Some strategies can also take their own options, which can be passed by giving -X<option> arguments to git merge and/or git pull.
resolve
This can only resolve two heads (i.e. the current branch and another branch you pulled from) using a 3-way merge algorithm. It tries to carefully detect criss-cross merge ambiguities and is considered generally safe and fast.
recursive
This can only resolve two heads using a 3-way merge algorithm. When there is more than one common ancestor that can be used for 3-way merge, it creates a merged tree of the common ancestors and uses that as the reference tree for the 3-way merge. This has been reported to result in fewer merge conflicts without causing mismerges by tests done on actual merge commits taken from Linux 2.6 kernel development history. Additionally this can detect and handle merges involving renames, but currently cannot make use of detected copies. This is the default merge strategy when pulling or merging one branch.
The recursive strategy can take the following options:
ours
This option forces conflicting hunks to be auto-resolved cleanly by favoring our version. Changes from the other tree that do not conflict with our side are reflected in the merge result. For a binary file, the entire contents are taken from our side.
This should not be confused with the ours merge strategy, which does not even look at what the other tree contains at all. It discards everything the other tree did, declaring our history contains all that happened in it.
theirs
This is the opposite of ours; note that, unlike ours, there is no theirs merge strategy to confuse this merge option with.
patience
With this option, merge-recursive spends a little extra time to avoid mismerges that sometimes occur due to unimportant matching lines (e.g., braces from distinct functions). Use this when the branches to be merged have diverged wildly. See also git-diff[1] --patience.
diff-algorithm=[patience|minimal|histogram|myers]
Tells merge-recursive to use a different diff algorithm, which can help avoid mismerges that occur due to unimportant matching lines (such as braces from distinct functions). See also git-diff[1] --diff-algorithm.
ignore-space-change
ignore-all-space
ignore-space-at-eol
ignore-cr-at-eol
Treats lines with the indicated type of whitespace change as unchanged for the sake of a three-way merge. Whitespace changes mixed with other changes to a line are not ignored. See also git-diff[1] -b, -w, --ignore-space-at-eol, and --ignore-cr-at-eol.
If their version only introduces whitespace changes to a line, our version is used;
If our version introduces whitespace changes but their version includes a substantial change, their version is used;
Otherwise, the merge proceeds in the usual way.
renormalize
This runs a virtual check-out and check-in of all three stages of a file when resolving a three-way merge. This option is meant to be used when merging branches with different clean filters or end-of-line normalization rules. See "Merging branches with differing checkin/checkout attributes" in gitattributes[5] for details.
no-renormalize
Disables the renormalize option. This overrides the merge.renormalize configuration variable.
no-renames
Turn off rename detection. This overrides the merge.renames configuration variable. See also git-diff[1] --no-renames.
find-renames[=<n>]
Turn on rename detection, optionally setting the similarity threshold. This is the default. This overrides the merge.renames configuration variable. See also git-diff[1] --find-renames.
rename-threshold=<n>
Deprecated synonym for find-renames=<n>.
subtree[=<path>]
This option is a more advanced form of subtree strategy, where the strategy makes a guess on how two trees must be shifted to match with each other when merging. Instead, the specified path is prefixed (or stripped from the beginning) to make the shape of two trees to match.
octopus
This resolves cases with more than two heads, but refuses to do a complex merge that needs manual resolution. It is primarily meant to be used for bundling topic branch heads together. This is the default merge strategy when pulling or merging more than one branch.
1002.How can you see which changes you have made before committing them?
`git diff`
1003.How do you revert a specific file to previous commit?
git checkout HEAD~1 -- /path/of/the/file
1004.How to squash the last two commits?
1005.What is the .git directory? What can you find there?
The .git folder contains all the information that is necessary for your project in version control: information about commits, the remote repository address, etc. It also contains a log that stores your commit history so that you can roll back in history.
This info was copied from https://stackoverflow.com/questions/29217859/what-is-the-git-folder
1006.What are some Git anti-patterns? Things that you shouldn't do
Waiting too long between commits
Removing the .git directory :)
1007.How do you remove a remote branch?
You can delete a remote branch with:
git push origin --delete [branch_name]
or with the older colon syntax:
git push origin :[branch_name]
1008.Are you familiar with gitattributes? When would you use it?
gitattributes allow you to define attributes per pathname or path pattern.
You can use it, for example, to control line endings in files. Windows and Unix based systems use different characters for new lines (\r\n and \n respectively). Using gitattributes we can align this for everyone working with the repository by putting * text=auto in .gitattributes. This way, if you use the Git project on Windows you'll get \r\n, and if you are using Unix or Linux, you'll get \n.
1009.How do you discard local file changes? (before commit)
git checkout -- <file_name>
1010.How do you discard local commits?
git reset HEAD~1 removes the last commit but keeps its changes. If you would like to also discard the changes, use `git reset --hard HEAD~1`
1011.True or False? To remove a file from git but not from the filesystem, one should use git rm
False. `git rm` removes the file from both the index and the filesystem. To remove a file from git but keep it on your filesystem, use `git rm --cached <file_name>`
1012.Explain Git octopus merge
Probably good to mention that:
It's used for merging more than one branch at once (and it's also the default strategy in such cases)
It's primarily meant for bundling topic branches together
DevSecOps
1013.What is DevSecOps? What are its core principles?
1014.What security techniques are you familiar with? (or what security techniques have you used in the past?)
1015.What the "Zero Trust" concept means? How Organizations deal with it?
1016.Explain Authentication and Authorization
1017.How do you manage sensitive information (like passwords) in different tools and platforms?
1018.Explain what is Single Sign-On
1019.Explain MFA (Multi-Factor Authentication)
1020.Explain RBAC (Role-based Access Control)
PUPPET
1021.What is Puppet? How does it work?
1022.Explain Puppet architecture
1023.Can you compare Puppet to other configuration management tools? Why did you choose to use Puppet?
1024.Explain the following:
Module
Manifest
Node
1025.Explain Facter
1026.What is MCollective?
1027.Do you have experience with writing modules? Which module have you created and for what?
1028.Explain what is Hiera