<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Ravi Kyada - The DevOps Guy]]></title><description><![CDATA[Ravi Kyada - The DevOps Guy]]></description><link>https://hashnode.ravikyada.in</link><generator>RSS for Node</generator><lastBuildDate>Tue, 14 Apr 2026 02:41:55 GMT</lastBuildDate><atom:link href="https://hashnode.ravikyada.in/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Optimizing Networking Costs in AWS EKS: Managing Cross-AZ Database Traffic]]></title><description><![CDATA[Managing AWS EKS clusters provides excellent scalability and reliability for running containerized applications. However, without careful cost monitoring, expenses can quickly add up. This article breaks down practical strategies to help you reduce A...]]></description><link>https://hashnode.ravikyada.in/optimizing-networking-costs-in-aws-eks-managing-cross-az-database-traffic-84e012913038</link><guid isPermaLink="true">https://hashnode.ravikyada.in/optimizing-networking-costs-in-aws-eks-managing-cross-az-database-traffic-84e012913038</guid><category><![CDATA[AWS]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[EKS]]></category><category><![CDATA[cost-optimisation]]></category><category><![CDATA[networking]]></category><dc:creator><![CDATA[Ravi Kyada]]></dc:creator><pubDate>Sat, 04 Jan 2025 14:59:52 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1736002578706/da496070-32ec-46df-86c6-09f9308e56a0.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Managing <a target="_blank" href="https://aws.amazon.com/eks/">AWS EKS</a> clusters provides excellent scalability and reliability for running containerized applications. 
However, without careful cost monitoring, expenses can quickly add up. This article breaks down practical strategies to help you reduce AWS EKS costs while keeping your applications fast and dependable.</p>
<p>Recently, we faced a significant challenge with our AWS EKS setup: networking costs escalated due to cross-AZ (Availability Zone) traffic.</p>
<p>Our high-workload application, built using <a target="_blank" href="https://nodejs.org/en"><strong>Node.js</strong></a> and <a target="_blank" href="https://www.php.net/"><strong>PHP</strong></a>, processes a large number of concurrent requests. The dynamic nature of this workload often triggers auto-scaling events, further adding to the complexity of resource management.</p>
<p>The application interacts with <a target="_blank" href="https://redis.io/"><strong>Redis</strong></a>, <a target="_blank" href="https://en.wikipedia.org/wiki/SQL"><strong>MySQL</strong></a>, and <a target="_blank" href="https://www.mongodb.com/"><strong>MongoDB</strong></a>, all deployed as StatefulSets with PersistentVolumes (PVs) and PersistentVolumeClaims (PVCs). These databases are distributed across three AZs to ensure high availability and fault tolerance.</p>
<p>However, the frequent queries from the application combined with database traffic crossing AZ boundaries significantly contributed to increased networking costs.</p>
<p>This blog outlines the root causes, the solutions we explored, and actionable steps to optimize networking costs while maintaining high availability and resilience.</p>
<h3 id="heading-breaking-down-eks-costs">Breaking Down EKS Costs</h3>
<p>When using AWS EKS, several cost components come into play:</p>
<h4 id="heading-1-eks-management-fee">1. EKS Management Fee:</h4>
<p>AWS charges a flat fee of <strong>$0.10 per hour per cluster</strong>, which translates to approximately <strong>$73 per month</strong> per cluster, regardless of the size of the cluster. This fee covers the managed control plane, which includes components like the Kubernetes API server and etcd for managing cluster state.</p>
<h4 id="heading-2-ec2-node-costs">2. EC2 Node Costs:</h4>
<p>EKS requires worker nodes (EC2 instances) to run your workloads. The cost of these nodes depends on:</p>
<ul>
<li><p>Instance type (e.g., t2.medium, m5.large).</p>
</li>
<li><p>Number of nodes in your cluster.</p>
</li>
<li><p>Uptime of these nodes.<br />  Additionally, costs can increase if you enable <strong>auto-scaling</strong>, as the cluster adds nodes dynamically based on workload demand.</p>
</li>
</ul>
<h4 id="heading-3-networking-costs">3. Networking Costs:</h4>
<p>Networking is among the most significant contributors to EKS costs, especially in setups spanning multiple AZs. Key contributors include:</p>
<ul>
<li><p><strong>Cross-AZ Data Transfer</strong>: AWS charges $0.01 per GB in each direction for data transfer between AZs. This cost can quickly add up when services such as databases communicate across AZs frequently.</p>
</li>
<li><p><strong>Inter-VPC Traffic</strong>: If your EKS cluster interacts with other VPCs or external services, you’ll incur additional charges.</p>
</li>
<li><p><a target="_blank" href="https://aws.amazon.com/elasticloadbalancing/"><strong>Elastic Load Balancer (ELB)</strong>:</a> Any ingress traffic into the cluster using an ELB incurs data transfer costs, which are billed separately.</p>
</li>
</ul>
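<p>To put the $0.01/GB figure in perspective, a quick back-of-the-envelope estimate helps. Note that AWS bills inter-AZ traffic in each direction, so a request/response round trip is effectively $0.02/GB; the daily volume below is purely illustrative:</p>

```python
# Back-of-the-envelope cross-AZ transfer cost (illustrative figures).
# AWS bills inter-AZ traffic at $0.01/GB in EACH direction, so a
# round trip across a zone boundary effectively costs $0.02/GB.
RATE_PER_GB_EACH_WAY = 0.01

def cross_az_monthly_cost(gb_per_day, round_trip=True):
    """Estimate a 30-day cross-AZ transfer bill for a daily traffic volume."""
    directions = 2 if round_trip else 1
    return gb_per_day * 30 * RATE_PER_GB_EACH_WAY * directions

# e.g. 500 GB/day of app-to-database chatter crossing AZs:
# 500 * 30 * 0.01 * 2 = $300/month
```

<p>At that rate, even a modest steady stream of database queries crossing zones turns into hundreds of dollars a month, which is why placement matters.</p>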
<h4 id="heading-4-persistent-storage-costs">4. Persistent Storage Costs:</h4>
<p>Stateful workloads, such as databases, rely on storage like Amazon EBS (Elastic Block Store) for data persistence. Costs depend on:</p>
<ul>
<li><p><strong>Volume size and type</strong> (e.g., General Purpose SSD, Provisioned IOPS).</p>
</li>
<li><p><strong>Snapshot storage for backups</strong>.</p>
</li>
<li><p><strong>IOPS</strong> charges for higher-performance workloads.</p>
</li>
</ul>
<h4 id="heading-5-additional-services">5. Additional Services:</h4>
<ul>
<li><p><a target="_blank" href="https://aws.amazon.com/cloudwatch/"><strong>CloudWatch</strong></a>: For logging and monitoring, AWS charges for log ingestion, storage, and data retrieval.</p>
</li>
<li><p><strong>Data Transfer Out</strong>: Traffic leaving AWS (e.g., to the internet or external systems) is billed at $0.09 per GB for the first 10TB each month.</p>
</li>
</ul>
<blockquote>
<p><em>Tip: To manage DNS configurations with your AWS EKS cluster, see our article on</em> <a target="_blank" href="https://ravijkyada.medium.com/deploy-external-dns-with-aws-eks-3d0157174169"><em>Deploying External DNS with AWS EKS</em></a><em>, which explores efficient integration strategies for managing a large number of DNS records.</em></p>
</blockquote>
<h3 id="heading-solutions-explored">Solutions Explored</h3>
<p>To address these issues, we considered multiple approaches:</p>
<h4 id="heading-1-topology-aware-service-routing">1. Topology-Aware Service Routing</h4>
<ul>
<li><p><strong>Objective</strong>: Ensure traffic remains within the same AZ whenever possible.</p>
</li>
<li><p><strong>Implementation</strong>: Enable topology-aware routing in Kubernetes (the successor to the deprecated <code>Service Topology</code> feature) by annotating Services so traffic prefers endpoints in the same AZ.</p>
</li>
</ul>
<p><strong>Benefits</strong>:</p>
<ul>
<li><p>Reduced cross-AZ traffic for database queries.</p>
</li>
<li><p>Improved latency due to AZ-local traffic.</p>
</li>
</ul>
<h4 id="heading-2-caching-frequently-accessed-data">2. Caching Frequently Accessed Data</h4>
<ul>
<li><p><strong>Objective</strong>: Reduce the frequency of repetitive queries to the database.</p>
</li>
<li><p><strong>Implementation</strong>:</p>
</li>
<li><p>Integrate <strong>Redis</strong> as a cache layer for read-heavy operations.</p>
</li>
<li><p>Use application-level caching with TTLs for frequent queries.</p>
</li>
</ul>
<p><strong>Benefits</strong>:</p>
<ul>
<li><p>Significantly reduced database queries.</p>
</li>
<li><p>Improved response times for cached data.</p>
</li>
</ul>
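<p>The cache-aside pattern with TTLs described above can be sketched in a few lines. This is a language-agnostic illustration that uses an in-process dict as a stand-in for Redis; in production the same logic would sit in front of your Redis client:</p>

```python
import time

class TTLCache:
    """Minimal cache-aside helper: serve from cache while the TTL is
    fresh, otherwise fall through to the database loader and repopulate.
    An in-process dict stands in for Redis to keep the sketch self-contained."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def get_or_load(self, key, load_from_db):
        now = time.monotonic()
        entry = self._store.get(key)
        if entry is not None and entry[1] > now:
            return entry[0]                    # cache hit: no DB round trip
        value = load_from_db(key)              # cache miss: one DB query
        self._store[key] = (value, now + self.ttl)
        return value

# Repeated reads inside the TTL hit the cache, not the database:
calls = []
load = lambda k: calls.append(k) or "row-" + k
cache = TTLCache(ttl_seconds=60)
cache.get_or_load("user:42", load)
cache.get_or_load("user:42", load)   # second read served from cache
```

<p>Every cache hit is a database query (and potentially a cross-AZ round trip) that never happens, which is why this pattern attacks both latency and transfer cost at once.</p>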
<h4 id="heading-3-read-replica-configuration">3. Read Replica Configuration</h4>
<ul>
<li><strong>Objective</strong>: Offload read traffic to replicas deployed in each AZ.</li>
</ul>
<p><strong>Implementation</strong>:</p>
<ul>
<li><p>Configure read replicas for MySQL and MongoDB in all AZs.</p>
</li>
<li><p>Update the application to route read queries to the nearest replica.</p>
</li>
</ul>
<p><strong>Benefits</strong>:</p>
<ul>
<li><p>Distributed read traffic.</p>
</li>
<li><p>Minimized cross-AZ reads for applications.</p>
</li>
</ul>
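<p>One way to route read queries to the nearest replica is to derive the endpoint from the AZ the pod runs in. A minimal sketch, assuming hypothetical replica hostnames and a <code>NODE_ZONE</code> environment variable:</p>

```python
import os

# Hypothetical per-AZ read endpoints; real hostnames would come from
# your Kubernetes DNS / service discovery, not these literals.
READ_REPLICAS = {
    "us-east-1a": "mysql-read.us-east-1a.svc.cluster.local",
    "us-east-1b": "mysql-read.us-east-1b.svc.cluster.local",
}
WRITER = "mysql-primary.svc.cluster.local"

def read_endpoint(zone=None):
    """Prefer the same-AZ replica; fall back to the primary if the
    pod's zone is unknown or has no replica."""
    zone = zone or os.environ.get("NODE_ZONE", "")
    return READ_REPLICAS.get(zone, WRITER)
```

<p>Here <code>NODE_ZONE</code> is assumed to be injected into the pod at deploy time; pods cannot read node labels directly, so teams typically surface the zone through an environment variable or an init step.</p>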
<h4 id="heading-4-monitor-and-analyze-traffic">4. Monitor and Analyze Traffic</h4>
<ul>
<li><strong>Objective</strong>: Identify high-cost traffic patterns and optimize them.</li>
</ul>
<p><strong>Implementation</strong>:</p>
<ul>
<li><p>Use tools like <strong>eBPF</strong> (via Cilium or similar solutions) to monitor pod-level traffic.</p>
</li>
<li><p>Visualize traffic patterns with Prometheus and Grafana.</p>
</li>
</ul>
<p><strong>Benefits</strong>:</p>
<ul>
<li><p>Insights into cross-AZ traffic contributors.</p>
</li>
<li><p>Better-informed optimization decisions.</p>
</li>
</ul>
<h4 id="heading-5-auto-scaling-with-reserved-instances">5. Auto-Scaling with Reserved Instances</h4>
<ul>
<li><p>Used <strong>Reserved Instances</strong> for predictable workloads to reduce EC2 costs.</p>
</li>
<li><p>For dynamic scaling, relied on <strong>Spot Instances</strong> with fallback to On-Demand Instances to balance cost and availability.</p>
</li>
</ul>
<h4 id="heading-6-monitoring-and-traffic-analysis">6. Monitoring and Traffic Analysis</h4>
<ul>
<li><p>Deployed tools like <strong>Prometheus</strong> and <strong>Grafana</strong> to monitor cross-AZ traffic patterns.</p>
</li>
<li><p>Used <strong>eBPF-based tools</strong> to analyze network flows and identify expensive traffic routes.</p>
</li>
<li><p>Optimized Node.js and PHP query patterns based on insights.</p>
</li>
</ul>
<h4 id="heading-7-efficient-ebs-volume-management">7. Efficient EBS Volume Management</h4>
<ul>
<li><p>Migrated to <strong>GP3 volumes</strong> for better cost-performance balance.</p>
</li>
<li><p>Reduced snapshot frequency to save on storage costs while maintaining adequate backup intervals.</p>
</li>
</ul>
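<p>The gp3 migration is a small per-StorageClass change. This sketch targets the AWS EBS CSI driver; the class name is illustrative and the IOPS/throughput values are the gp3 baselines, to be raised only where measurements justify it:</p>

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3-standard               # illustrative name
provisioner: ebs.csi.aws.com       # requires the AWS EBS CSI driver
parameters:
  type: gp3
  iops: "3000"                     # gp3 baseline
  throughput: "125"                # MiB/s, gp3 baseline
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer  # create the volume in the consumer pod's AZ
```

<p><code>WaitForFirstConsumer</code> also helps the cross-AZ story: the volume is provisioned in the zone where the pod is actually scheduled, rather than in an arbitrary AZ.</p>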
<h3 id="heading-implementation-example-topology-aware-routing">Implementation Example: Topology-Aware Routing</h3>
<p>Here’s a quick implementation of topology-aware routing for a MySQL Stateful Set:</p>
<h4 id="heading-1-annotate-the-kubernetes-service"><strong>1. Annotate the Kubernetes Service</strong>:</h4>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Service</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">mysql</span>
  <span class="hljs-attr">annotations:</span>
    <span class="hljs-comment"># Topology Aware Routing; requires Kubernetes 1.27+.</span>
    <span class="hljs-comment"># On 1.23–1.26, use service.kubernetes.io/topology-aware-hints: "auto"</span>
    <span class="hljs-attr">service.kubernetes.io/topology-mode:</span> <span class="hljs-string">"Auto"</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">mysql</span>
  <span class="hljs-attr">ports:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">protocol:</span> <span class="hljs-string">TCP</span>
      <span class="hljs-attr">port:</span> <span class="hljs-number">3306</span>
</code></pre>
<h4 id="heading-2-deploy-pods-with-node-affinity"><strong>2. Deploy Pods with Node Affinity</strong>:</h4>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">StatefulSet</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">mysql</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">template:</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">affinity:</span>
        <span class="hljs-attr">nodeAffinity:</span>
          <span class="hljs-attr">requiredDuringSchedulingIgnoredDuringExecution:</span>
            <span class="hljs-attr">nodeSelectorTerms:</span>
              <span class="hljs-bullet">-</span> <span class="hljs-attr">matchExpressions:</span>
                  <span class="hljs-bullet">-</span> <span class="hljs-attr">key:</span> <span class="hljs-string">topology.kubernetes.io/zone</span>
                    <span class="hljs-attr">operator:</span> <span class="hljs-string">In</span>
                    <span class="hljs-attr">values:</span>
                      <span class="hljs-bullet">-</span> <span class="hljs-string">us-east-1a</span>
</code></pre>
<h4 id="heading-3-migrate-architecture-to-2-azs"><strong>3. Migrate Architecture to 2 AZs</strong>:</h4>
<p>As part of our optimization strategy, we decided to reduce our deployment footprint from <strong>three AZs to two</strong>. This architectural shift was motivated by the need to:</p>
<ul>
<li><p><strong>Reduce cross-AZ networking costs</strong>: With three AZs, application pods and database instances often communicated across zones, leading to significant cross-AZ data transfer charges.</p>
</li>
<li><p><strong>Simplify operational overhead</strong>: Managing resources across three AZs introduced complexity in terms of monitoring, scaling, and maintaining consistency.</p>
</li>
</ul>
<h4 id="heading-steps-taken-during-migration">Steps Taken During Migration:</h4>
<p><strong>Align Backend Applications and Databases in the Same AZs</strong>:</p>
<ul>
<li><p>We grouped our <strong>Node.js and PHP application pods</strong> and their associated <strong>Redis, MySQL, and MongoDB databases</strong> within the same AZ.</p>
</li>
<li><p>This ensured that most traffic between application pods and databases remained within a single AZ, minimizing cross-AZ traffic.</p>
</li>
</ul>
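<p>Grouping application pods with their databases can be expressed declaratively with pod affinity on the zone topology key. A sketch, assuming the MySQL pods carry an <code>app: mysql</code> label (labels and image below are illustrative):</p>

```yaml
# Schedule backend pods into the same AZ as the MySQL pods they query.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      affinity:
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: mysql              # illustrative DB pod label
                topologyKey: topology.kubernetes.io/zone
      containers:
        - name: backend
          image: example/backend:latest     # placeholder image
```

<p>Using <code>preferred</code> rather than <code>required</code> keeps the scheduler free to place pods elsewhere if the co-located zone runs out of capacity.</p>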
<p><strong>Reconfigure StatefulSets for Zone Awareness</strong>:</p>
<ul>
<li><p>The StatefulSets for MySQL and MongoDB were reconfigured with <strong>node affinity rules</strong> to limit pod placement to the chosen two AZs.</p>
</li>
<li><p>For example, the affinity configuration ensured MySQL replicas in <code>us-east-1a</code> were primarily accessed by application pods running in <code>us-east-1a</code>.</p>
</li>
</ul>
<p><strong>Redistribute Traffic</strong>:</p>
<ul>
<li><p>Traffic patterns were adjusted to route requests to backend pods residing in the same AZ as the database.</p>
</li>
<li><p>This was achieved using <strong>Service Topology</strong> and DNS resolution to prioritize local AZ instances.</p>
</li>
</ul>
<p><strong>Leverage Read Replicas for High Availability</strong>:</p>
<ul>
<li><p>To maintain availability and fault tolerance after reducing AZs, <strong>read replicas</strong> were deployed in both AZs.</p>
</li>
<li><p>This setup ensured that each AZ could independently handle read-heavy traffic in case of a single AZ failure.</p>
</li>
</ul>
<h4 id="heading-benefits-achieved"><strong>Benefits Achieved:</strong></h4>
<p><strong>1. Significant Reduction in Networking Costs</strong>:</p>
<ul>
<li>By ensuring backend pods and database instances resided in the same AZ, we reduced <strong>cross-AZ data transfer</strong> by over 60%. The remaining cross-AZ traffic was primarily for write replication between database nodes.</li>
</ul>
<p><strong>2. Improved Application Performance</strong>:</p>
<ul>
<li>Latency decreased as the traffic between application pods and databases no longer crossed AZ boundaries, leading to faster query execution.</li>
</ul>
<p><strong>3. Optimized Resource Utilization</strong>:</p>
<ul>
<li>Consolidating resources across two AZs allowed us to better utilize EC2 instances and scale horizontally within those zones without over-provisioning.</li>
</ul>
<p><strong>4. Resilient Architecture</strong>:</p>
<ul>
<li>Despite reducing to two AZs, the use of read replicas and distributed services ensured that the application could still handle AZ failures effectively.</li>
</ul>
<p><strong>Key Considerations:</strong></p>
<ul>
<li><p><strong>Trade-off Between Availability and Cost</strong>: While migrating to two AZs reduced costs, we carefully analyzed the risk of reduced fault tolerance compared to a three-AZ setup. For our workload, the benefits outweighed the risks, as our architecture still adhered to AWS’s high-availability guidelines.</p>
</li>
<li><p><strong>Database Write Replication Traffic</strong>: Since database write operations still required cross-AZ replication to maintain consistency, we evaluated and optimized replication intervals and volumes.</p>
</li>
</ul>
<h4 id="heading-verify-traffic-patterns"><strong>Verify Traffic Patterns</strong>:</h4>
<p>After migration, we used <strong>Prometheus</strong> and <strong>Grafana</strong> to monitor key metrics such as:</p>
<ul>
<li><p><code>network_tx_bytes</code>: Measured overall traffic between pods and identified any lingering cross-AZ traffic.</p>
</li>
<li><p><code>request_latency</code>: Ensured that application response times improved post-migration.</p>
</li>
<li><p><strong>Custom Dashboards</strong>: Created dashboards to visualize traffic patterns, helping us confirm that most communication was now AZ-local.</p>
</li>
</ul>
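<p>As an example of the kind of query behind those dashboards, the following PromQL sums per-zone pod transmit rates. The metric is the standard cAdvisor counter; how the <code>zone</code> label gets joined onto it depends on your kube-state-metrics setup, so treat that label name as an assumption:</p>

```promql
# Transmit rate per zone over the last 5 minutes
sum by (zone) (
  rate(container_network_transmit_bytes_total[5m])
)
```

<p>Comparing this per-zone breakdown before and after the migration made the drop in cross-AZ traffic directly visible.</p>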
<p>This migration not only helped reduce costs but also streamlined our operational overhead while ensuring performance and reliability. By taking a systematic approach to reduce the number of AZs and optimize resource placement, we were able to maintain the high availability of our workloads without incurring unnecessary expenses.</p>
<h3 id="heading-results">Results</h3>
<p>After implementing these solutions:</p>
<ul>
<li><p><strong>Cross-AZ traffic dropped by 40%</strong>, significantly lowering costs.</p>
</li>
<li><p>Application response times improved by 20% due to reduced latency.</p>
</li>
<li><p>The caching layer offloaded 60% of read-heavy operations from the databases.</p>
</li>
</ul>
<h3 id="heading-summary">Summary</h3>
<p>In this article, we tackled the challenges of managing cross-AZ traffic costs in an AWS EKS cluster. By implementing topology-aware routing, caching frequently accessed data, and optimizing database configurations, we reduced networking costs and improved application performance.</p>
<p>Monitoring tools like eBPF, Prometheus, and Grafana played a crucial role in identifying and resolving traffic inefficiencies. These strategies can be a starting point for optimizing Kubernetes workloads, especially in multi-AZ deployments.</p>
<p>Developing scalable backend solutions requires a deep understanding of cloud-native architectures and DevOps best practices. It also demands broad expertise, robust tooling, and a collaborative approach with a <a target="_blank" href="https://www.yudiz.com/devops-consulting/"><strong>leading DevOps service provider</strong></a>.</p>
<p>By partnering with experienced DevOps engineers, you can access a comprehensive suite of cloud and backend development services, leverage agile methodologies, and implement advanced optimization techniques tailored to your business needs.</p>
<blockquote>
<p><strong>If you’re facing similar challenges or have any questions, feel free to share your experience in the comments!</strong></p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[A Guide to Understanding Availability Zones, Edge Locations, and Data Centers]]></title><description><![CDATA[Amazon Web Services (AWS) comes to mind when you think about cloud computing. AWS takes care of all the scenarios of how we deploy and manage applications, offering flexibility, scalability, and reliability. But what makes AWS so powerful? It’s all i...]]></description><link>https://hashnode.ravikyada.in/understanding-availability-zones-edge-locations-and-data-centers</link><guid isPermaLink="true">https://hashnode.ravikyada.in/understanding-availability-zones-edge-locations-and-data-centers</guid><category><![CDATA[AWS]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[edge computing]]></category><dc:creator><![CDATA[Ravi Kyada]]></dc:creator><pubDate>Mon, 05 Aug 2024 07:57:49 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1742281098338/edfc963a-0086-49d6-b8fb-118f2512543c.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Amazon Web Services (AWS) comes to mind when you think about cloud computing. AWS takes care of all the scenarios of how we deploy and manage applications, offering flexibility, scalability, and reliability. But what makes AWS so powerful? It’s all in the infrastructure.</p>
<h3 id="heading-why-understanding-aws-infrastructure-is-important">Why Understanding AWS Infrastructure is Important</h3>
<p>By understanding the basic elements of AWS, businesses can fully utilize its infrastructure, which guarantees cost-effectiveness, improved security, and peak performance. Let’s take a closer look at the elements that, when combined, deliver an effective architecture.</p>
<h3 id="heading-regions-in-aws-definition-and-importance">Regions in AWS: Definition and Importance</h3>
<p>AWS Regions are large geographical areas that cover multiple Availability Zones. Think of a Region as a continent-sized area, providing a high-level physical separation from other Regions to ensure fault tolerance and stability.</p>
<h4 id="heading-how-regions-works">How Regions Work:</h4>
<p>Each Region operates independently, allowing you to place resources closer to your end users, which reduces latency and improves performance.</p>
<p>For instance, hosting your application in the US-East-1 Region will provide faster access for users in North America compared to users in Asia.</p>
<h4 id="heading-selecting-the-right-region">Selecting the Right Region</h4>
<p>Choosing the right Region is critical. Consider factors like proximity to your users, compliance with local regulations, and available services. AWS provides a Region selection tool to help you decide the best location for your needs.</p>
<p>AWS also publishes helpful guidance on <a target="_blank" href="https://aws.amazon.com/blogs/architecture/what-to-consider-when-selecting-a-region-for-your-workloads/">what to consider when selecting a Region for your workloads</a>.</p>
<h3 id="heading-what-are-availability-zones">What are Availability Zones?</h3>
<p>Within each AWS Region are multiple Availability Zones. An AZ consists of one or more discrete data centers with independent power, cooling, and networking. AZs are designed to be isolated from failures in other AZs.</p>
<h4 id="heading-benefits-of-availability-zones">Benefits of Availability Zones</h4>
<p>AZs provide fault tolerance by distributing resources across different physical locations. If one AZ goes down, the others continue operating, ensuring your applications remain available.</p>
<h4 id="heading-how-azs-enhance-resilience-and-redundancy">How AZs Enhance Resilience and Redundancy</h4>
<p>By deploying your applications across multiple AZs, you create a highly available architecture. This setup protects against data loss and service interruptions, as each AZ is designed to be resilient against failures.</p>
<h3 id="heading-data-centers-the-backbone-of-aws">Data Centers: The Backbone of AWS</h3>
<p>Data centers are the physical facilities where AWS infrastructure resides. They contain the servers and networking equipment necessary to run your applications and store your data.</p>
<p>AWS data centers are facilities designed for high availability and security. They are equipped with redundant power supplies, advanced cooling systems, and robust network connectivity.</p>
<h4 id="heading-security-measures-in-aws-data-centers">Security Measures in AWS Data Centers</h4>
<p>Security is paramount in AWS data centers. They employ multiple layers of physical and logical security measures, including biometric access controls, surveillance systems, and regular security audits.</p>
<h3 id="heading-edge-locations-definition-and-purpose">Edge Locations: Definition and Purpose</h3>
<p>Edge Locations are AWS data centers designed to cache content closer to your users, reducing latency and improving performance. They are part of AWS’s content delivery network (CDN) known as Amazon CloudFront.</p>
<h4 id="heading-edge-locations-vs-availability-zones">Edge Locations vs. Availability Zones</h4>
<p>While AZs focus on resilience and redundancy, Edge Locations aim to deliver content quickly. They are strategically placed in major cities worldwide to ensure users get the fastest access to your content.</p>
<h4 id="heading-enhancing-user-experience-with-edge-locations">Enhancing User Experience with Edge Locations</h4>
<p>Edge Locations cache static content, such as images and videos, and dynamically accelerate APIs and other web services. This results in faster load times and a smoother user experience.</p>
<h3 id="heading-global-infrastructure-how-aws-connects-the-world">Global Infrastructure: How AWS Connects the World</h3>
<p>AWS’s global infrastructure spans across multiple Regions and AZs, connected by a high-speed, low-latency private network. This vast network ensures that your applications can scale globally with minimal latency.</p>
<p>Network latency is the delay before a transfer of data begins following an instruction for its transfer. AWS minimizes latency by using Edge Locations and strategically placing data centers near major internet hubs.</p>
<h3 id="heading-real-world-examples-of-aws-global-reach">Real-World Examples of AWS Global Reach</h3>
<p>Companies like Netflix and Airbnb use AWS’s global infrastructure to serve millions of users worldwide. AWS enables these companies to deliver seamless and fast services regardless of user location.</p>
<h4 id="heading-scalability-and-flexibility-scaling-resources-with-aws">Scalability and Flexibility: Scaling Resources with AWS</h4>
<p>AWS allows you to scale your resources up or down based on demand. This elasticity ensures you only pay for what you use, making it a cost-effective solution for businesses of all sizes.</p>
<h4 id="heading-flexibility-in-deploying-applications">Flexibility in Deploying Applications</h4>
<p>With AWS, you can deploy applications in various environments, whether on-premises, hybrid, or fully cloud-based. This flexibility helps you adapt to changing business needs and technology advancements.</p>
<h3 id="heading-security-and-compliance-security-features-of-aws">Security and Compliance: Security Features of AWS</h3>
<p>AWS provides a robust security framework, including identity and access management (IAM), encryption, and network firewalls. These features ensure your data and applications are secure.</p>
<h4 id="heading-compliance-standards">Compliance Standards</h4>
<p>AWS meets numerous compliance standards, such as GDPR, HIPAA, and SOC 2, making it suitable for industries with stringent regulatory requirements.</p>
<h4 id="heading-ensuring-data-privacy">Ensuring Data Privacy</h4>
<p>AWS implements strict data privacy measures, including data encryption at rest and in transit, and allows you to manage your own encryption keys.</p>
<h3 id="heading-case-studies-major-companies-using-aws">Case Studies: Major Companies Using AWS</h3>
<p>Companies like Capital One, NASA, and General Electric rely on AWS for their cloud infrastructure. These organizations benefit from AWS’s scalability, reliability, and global reach.</p>
<h4 id="heading-success-stories-and-lessons-learned">Success Stories and Lessons Learned</h4>
<p>Capital One, for instance, leveraged AWS to enhance its security posture and innovate rapidly. NASA uses AWS to store and analyze vast amounts of data from space missions, demonstrating AWS’s ability to handle large-scale projects.</p>
<h3 id="heading-choosing-the-right-aws-components">Choosing the Right AWS Components</h3>
<h4 id="heading-factors-to-consider">Factors to Consider</h4>
<p>When choosing AWS components, consider your application’s needs, such as performance, availability, and cost. AWS offers tools like the <a target="_blank" href="https://docs.aws.amazon.com/wellarchitected/latest/framework/welcome.html">Well-Architected Framework</a> to guide your decision-making process.</p>
<h4 id="heading-balancing-cost-and-performance">Balancing Cost and Performance</h4>
<p>Optimize your architecture by balancing cost and performance. Use cost management tools provided by AWS to monitor and adjust your spending.</p>
<p>Follow best practices, such as automating deployments, implementing proper security measures, and regularly reviewing your architecture to ensure it meets your business goals.</p>
<h3 id="heading-future-of-aws-infrastructure">Future of AWS Infrastructure</h3>
<h4 id="heading-emerging-trends">Emerging Trends</h4>
<p>AWS continues to innovate with new services and features. Trends like machine learning, IoT, and serverless computing are becoming increasingly prominent.</p>
<h4 id="heading-predictions-for-the-next-decade">Predictions for the Next Decade</h4>
<p>Over the next decade, expect AWS to further enhance its infrastructure with more Regions, AZs, and Edge Locations. These advancements will continue to drive down latency and improve global accessibility.</p>
<h3 id="heading-migration-to-aws-steps-for-a-smooth-transition">Migration to AWS: Steps for a Smooth Transition</h3>
<p>Migrating to AWS involves several steps, including assessment, planning, migration, and optimization. AWS provides tools like AWS Migration Hub to simplify the process.</p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>Understanding AWS infrastructure is crucial for leveraging its full potential. By comprehending Regions, AZs, Data Centers, and Edge Locations, you can build resilient, scalable, and efficient applications.</p>
<h3 id="heading-final-thoughts">Final Thoughts</h3>
<p>AWS’s robust global infrastructure offers unparalleled opportunities for businesses to innovate and grow. As AWS continues to expand and evolve, staying informed about its components and best practices will ensure you remain competitive in the cloud era.</p>
<h3 id="heading-faq">FAQ</h3>
<h4 id="heading-what-are-availability-zones-in-aws">What are Availability Zones in AWS?</h4>
<p>Availability Zones (AZs) are isolated locations within an AWS Region, each consisting of one or more data centers. They provide fault tolerance and high availability and are designed to be insulated from failures in other AZs.</p>
<h4 id="heading-how-do-edge-locations-improve-performance">How do Edge Locations Improve Performance?</h4>
<p>Edge Locations cache content closer to users, reducing latency and ensuring faster load times for websites and applications. They are part of AWS’s content delivery network (CDN).</p>
<h4 id="heading-what-is-the-difference-between-regions-and-availability-zones">What is the Difference Between Regions and Availability Zones?</h4>
<p>Regions are large geographical areas containing multiple Availability Zones. AZs are isolated data centers within a Region that provide redundancy and fault tolerance.</p>
<h4 id="heading-how-secure-are-aws-data-centers">How Secure are AWS Data Centers?</h4>
<p>AWS data centers employ multiple layers of security, including physical controls like biometric access, and logical controls such as encryption and network firewalls. They undergo regular security audits to ensure compliance.</p>
<h4 id="heading-can-small-businesses-benefit-from-aws">Can Small Businesses Benefit from AWS?</h4>
<p>Yes, AWS offers scalable and cost-effective solutions that are ideal for small businesses. Services like AWS Lightsail provide easy-to-use cloud resources tailored for small-scale applications.</p>
<p>Thank you so much for Reading the Article till the End! 🙌🏻 Your time and interest truly mean a lot 😁📃.</p>
<p>If you have any questions or thoughts about this blog, feel free to connect with me:</p>
<p>LinkedIn: <a target="_blank" href="https://www.linkedin.com/in/ravikyada">https://www.linkedin.com/in/ravikyada</a></p>
<p>Twitter: <a target="_blank" href="https://twitter.com/ravijkyada">https://twitter.com/ravijkyada</a></p>
<p>Until next time, Cheers to more learning and discovery✌🏻!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722577956967/a2b6c268-0839-4678-ad31-abb038c36c9d.gif" alt /></p>
<hr />
<p><a target="_blank" href="https://infiq.ravikyada.in/a-guide-to-understanding-availability-zones-edge-locations-and-data-centers-f84cba3b7af7">A Guide to Understanding Availability Zones, Edge Locations, and Data Centers</a> was originally published in <a target="_blank" href="https://infiq.ravikyada.in">InfiQ Technologies</a> on Medium.</p>
]]></content:encoded></item><item><title><![CDATA[Kubernetes Services vs. Ingress: A Beginner’s Guide]]></title><description><![CDATA[Today, we will guide you through one of the essential aspects: Kubernetes Services and Ingress.
Kubernetes Services, which allows network access to Pods managed within deployments. We’ll explore the various Kubernetes Services types and touch upon Ku...]]></description><link>https://hashnode.ravikyada.in/kubernetes-services-vs-ingress-a-beginners-guide</link><guid isPermaLink="true">https://hashnode.ravikyada.in/kubernetes-services-vs-ingress-a-beginners-guide</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[kubernetes ingress]]></category><dc:creator><![CDATA[Ravi Kyada]]></dc:creator><pubDate>Fri, 08 Mar 2024 08:07:34 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1713155687617/2be115ca-711e-4062-b54e-09e480c0a395.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Today, we will guide you through one of the essential aspects: Kubernetes Services and Ingress.</p>
<p>We’ll start with Kubernetes Services, which provide network access to the Pods managed by your deployments. We’ll explore the various Kubernetes Service types and then touch upon Kubernetes Ingress.</p>
<p>Ingress isn’t a Service; rather, it offers an alternative method for directing external traffic to the services in your cluster.</p>
<p>We will go through all the Service types and Ingress, and provide examples of deploying an application both without and with Ingress, to help everyone understand the differences and the benefits Ingress brings.</p>
<h3 id="heading-understanding-kubernetes-services">Understanding Kubernetes Services</h3>
<h4 id="heading-what-are-they-exactly">What Are They Exactly?</h4>
<p>Kubernetes Services act like traffic managers within your cluster, directing requests to the right destinations.</p>
<p>In Kubernetes, a service serves as a stable access point for pods offering the same functionality. Think of it as a consistent address and port combination that remains unchanged as long as the service runs.</p>
<p>The service automatically routes incoming connections to one of the pods, even if the pod’s location changes within the cluster.</p>
<p>This simplifies management, as pods can be moved or scaled without disrupting client access. Kubernetes services support load balancing, ensuring efficient distribution of client requests among the available pods.</p>
<p>Kubernetes Services are endpoints or interfaces of pods (or groups of pods) that perform the same function. In simple terms, they act as gates, but only as entrances.</p>
<p>Creating a service is like opening a gate through which outside traffic can easily reach the pods behind it.</p>
<h4 id="heading-types-of-kubernetes-services">Types of Kubernetes Services:</h4>
<p>There are a few types of Kubernetes Services, each serving a different purpose. Whether it’s ClusterIP for internal communication or NodePort/LoadBalancer for external access, there’s a service for every need.</p>
<h4 id="heading-1-clusterip">1. ClusterIP:</h4>
<p>Suitable for exposing services only internally, such as databases, caches, or message queues (no one outside can talk to them).</p>
<p>When we create a ClusterIP service, Kubernetes assigns it a virtual IP from a reserved pool, allowing pods or other services within the cluster to reach the service through that IP.</p>
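<p>As a sketch, a minimal ClusterIP Service manifest might look like this (the names, labels, and ports are illustrative, not taken from a real cluster):</p>

```yaml
# Illustrative ClusterIP Service: reachable only from inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: ClusterIP        # the default type; shown explicitly for clarity
  selector:
    app: postgres        # routes to pods carrying this label
  ports:
    - port: 5432         # port exposed on the service's virtual IP
      targetPort: 5432   # port the selected pods listen on
```

<p>Pods inside the cluster can then reach the database at <code>postgres:5432</code> via the cluster’s internal DNS.</p>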
<h4 id="heading-2-nodeport">2. NodePort:</h4>
<p>Suitable for exposing services for testing or debugging during app development.</p>
<p>When we create a NodePort service, Kubernetes reserves one port on every node in the cluster (from the range 30000–32767 by default), and traffic sent to any node’s IP on that port is forwarded to the service.</p>
<p>For example:</p>
<ul>
<li><p>[Node1] IP XXXX, port 8080 will be forwarded to the Apache server.</p>
</li>
<li><p>[Node2] IP YYYY, port 3000 will be forwarded to the Node.js backend.</p>
</li>
<li><p>[Node1] IP XXXX, port 3001 will be forwarded to the product server.</p>
</li>
</ul>
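<p>A NodePort manifest is almost identical to a ClusterIP one; the sketch below (again with illustrative names and ports) additionally pins a port in the 30000–32767 range on every node:</p>

```yaml
# Illustrative NodePort Service: reachable at <any-node-IP>:30080.
apiVersion: v1
kind: Service
metadata:
  name: web-test
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80           # service port inside the cluster
      targetPort: 8080   # container port the pods listen on
      nodePort: 30080    # optional; Kubernetes picks one if omitted
```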
<h4 id="heading-3-loadbalancer">3. LoadBalancer:</h4>
<p>When we create a LoadBalancer service, Kubernetes provisions an actual load balancer outside of the cluster (on the cloud platform, if you are running in the cloud), allowing external users to access the service through the load balancer’s IP.</p>
<p>A LoadBalancer service is suitable for exposing production services that need to be reached from outside Kubernetes or from the public internet, such as a simple web app.</p>
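<p>The manifest differs from the previous types only in the <code>type</code> field; everything else (illustrative names and ports again) stays the same:</p>

```yaml
# Illustrative LoadBalancer Service: the cloud provider provisions
# an external load balancer pointing at this service.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80           # port exposed by the external load balancer
      targetPort: 8080   # container port the pods listen on
```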
<blockquote>
<p><em>Kubernetes also has another service type that is less commonly discussed: ExternalName. We won't go into detail here; in short, it maps a service to a DNS name (a redirection) and doesn't do any load balancing.</em></p>
</blockquote>
<h3 id="heading-exploring-kubernetes-services">Exploring Kubernetes Services</h3>
<h4 id="heading-how-do-they-work">How Do They Work?</h4>
<p>Picture a busy highway filled with cars and trucks. Kubernetes Services ensure that each car (or pod’s traffic) reaches its destination safely, without colliding with others.</p>
<h4 id="heading-where-can-you-use-them">Where Can You Use Them?</h4>
<p>Kubernetes Services are handy in various scenarios, from microservices architectures to monolithic applications. They keep everything connected and running smoothly behind the scenes.</p>
<hr />
<h3 id="heading-demo-1-try-to-deploy-the-app-using-only-services">[Demo-1] Try to Deploy the App using only Services.</h3>
<p>Suppose I have an infiq app that consists of 4 parts:</p>
<ol>
<li><p>Frontend</p>
</li>
<li><p>Admin Panel</p>
</li>
<li><p>Backend</p>
</li>
<li><p>PostgreSQL Database</p>
</li>
</ol>
<p>If I want to deploy this app with Kubernetes services, I might choose the following service:</p>
<ul>
<li><p>The frontend service uses LoadBalancer because external users will call it.</p>
</li>
<li><p>The admin service uses LoadBalancer because external users (other teams) will call it.</p>
</li>
<li><p>The backend service uses LoadBalancer because external users will call into it.</p>
</li>
<li><p>The database service uses ClusterIP because it is only called by services inside the cluster.</p>
</li>
</ul>
<p>And I need to create a record from an external DNS like this.</p>
<ul>
<li><p>infiq.tech points to the load balancer IP of the frontend service.</p>
</li>
<li><p>admin.infiq.tech points to the load balancer IP of the admin service.</p>
</li>
<li><p>backend.infiq.tech points to the load balancer IP of the backend service.</p>
</li>
</ul>
<p>It works fine, but the problem is…</p>
<ol>
<li><p><strong>Wasteful:</strong> to run this application in production I have to pay for 3 load balancers.</p>
</li>
<li><p><strong>Complicated:</strong> we must manage 3 DNS records.</p>
</li>
<li><p><strong>There are limitations:</strong> URL path routing cannot be done; a subdomain must be used instead. For example, infiq.tech/profile cannot be routed on its own → it must be served from its own subdomain.</p>
</li>
<li><p><strong>(Maybe) Chaotic</strong>: If the load balancer IP changes, you have to change the DNS record (except on the cloud where we usually point DNS to the load balancer DNS name).</p>
</li>
</ol>
<blockquote>
<p><em>Actually, regarding URL path routing, we could deploy a proxy to achieve the same thing. That also reduces the hassle of DNS records, but then we have to maintain the proxy ourselves and still pay the load balancer fees.</em></p>
</blockquote>
<p>And let’s try to see what Kubernetes Ingress can help us with.</p>
<hr />
<h3 id="heading-getting-to-know-kubernetes-ingress">Getting to Know Kubernetes Ingress</h3>
<h4 id="heading-breaking-down-the-basics">Breaking Down the Basics</h4>
<p>Think of Kubernetes Ingress as the gatekeeper to your cluster. It manages incoming traffic, allowing external users to access your applications securely.</p>
<h4 id="heading-how-ingress-plays-its-part">How Ingress Plays Its Part</h4>
<p>Ingress works its magic by routing requests based on rules you define. It’s like having a personal concierge who ensures that each visitor gets to their desired destination without getting lost.</p>
<p><strong>Kubernetes Ingress</strong> is a traffic controller in a Kubernetes cluster that is inserted in front of other services. It receives traffic from the load balancer and sends traffic to various services according to routing rules.</p>
<p><strong>To use Ingress, there must be 2 parts:</strong></p>
<ol>
<li><p><strong>Ingress Controller</strong>: we first need to deploy a controller (such as NGINX, alb-controller, Traefik, etc.) to act as a proxy for us.</p>
</li>
<li><p><strong>Ingress Resource</strong>: configures the controller to behave the way we want. It is a Kubernetes resource, just like pods or deployments.</p>
</li>
</ol>
<hr />
<h3 id="heading-demo-2-try-deploying-the-app-using-ingress-to-help">[Demo-2] Try Deploying the App using Ingress to help.</h3>
<p>We will change all 3 services (frontend, admin, backend) from LoadBalancer to ClusterIP, because we will use only the single load balancer in front of the ingress to expose them outside the cluster.</p>
<p>DNS will then have only 1 record left, pointing to the load balancer IP of the ingress.</p>
<p>And this is what I got after bringing ingress to this Application.</p>
<ol>
<li><p>Instead of 3 load balancers, only 1 is needed.</p>
</li>
<li><p>Manage a single DNS record pointing to ingress.</p>
</li>
<li><p>Routing can be done by host or URL path, for example:<br /> - infiq.tech → frontend service<br /> - admin.infiq.tech/users → admin service<br /> - infiq.tech/profile → frontend service, via a path-based route</p>
</li>
<li><p>Configuration regarding URL routing or TLS can be done in ingress and stored as code (Kubernetes manifest) with other manifests in one place.</p>
</li>
</ol>
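<p>The routing rules above could be expressed with an Ingress resource roughly like this (the service names, ports, and ingress class are assumptions for the sketch):</p>

```yaml
# Illustrative Ingress resource implementing host-based routing.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: infiq-ingress
spec:
  ingressClassName: nginx        # assumes an NGINX ingress controller is installed
  rules:
    - host: infiq.tech
      http:
        paths:
          - path: /              # a Prefix match on / also covers paths like /profile
            pathType: Prefix
            backend:
              service:
                name: frontend   # ClusterIP service
                port:
                  number: 80
    - host: admin.infiq.tech
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: admin
                port:
                  number: 80
```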
<h3 id="heading-unveiling-the-differences">Unveiling the Differences</h3>
<h4 id="heading-what-sets-services-and-ingress-apart">What Sets Services and Ingress Apart?</h4>
<p>Services focus on internal communication, while Ingress handles external access. It’s like comparing an internal phone system (Services) to a reception desk (Ingress) that welcomes external visitors.</p>
<h4 id="heading-choosing-the-right-tool">Choosing the Right Tool</h4>
<p>Deciding between Services and Ingress depends on your application’s needs. If you’re dealing with internal communication, go for Services. For external access, Ingress is your go-to.</p>
<hr />
<h3 id="heading-frequently-asked-questions">Frequently Asked Questions</h3>
<h4 id="heading-whats-the-main-difference-between-kubernetes-services-and-ingress">What’s the main difference between Kubernetes Services and Ingress?</h4>
<p>Services handle internal communication within your cluster, while Ingress manages external access to your services.</p>
<h4 id="heading-can-you-use-kubernetes-services-and-ingress-together">Can you use Kubernetes Services and Ingress together?</h4>
<p>Absolutely! Services and Ingress complement each other, working together to ensure seamless communication both inside and outside your cluster.</p>
<h4 id="heading-how-do-i-decide-which-to-use-for-my-application">How do I decide which to use for my application?</h4>
<p>Consider your application’s needs. If you’re dealing with internal communication between pods, go for Services. If you need to manage external access, Ingress is your go-to.</p>
<h4 id="heading-does-using-kubernetes-ingress-affect-performance">Does using Kubernetes Ingress affect performance?</h4>
<p>When configured properly, the performance impact of Ingress is minimal. However, inefficient routing or misconfiguration can lead to performance issues.</p>
<h4 id="heading-how-does-kubernetes-handle-complex-deployments-with-both-services-and-ingress">How does Kubernetes handle complex deployments with both Services and Ingress?</h4>
<p>Kubernetes is designed to handle complex deployments seamlessly. By carefully configuring your Services and Ingress resources, you can ensure smooth traffic routing and management in even the most intricate setups.</p>
<p>Thank you so much for Reading the Article till the End! 🙌🏻 Your time and interest truly mean a lot 😁📃.</p>
<p>If you have any questions or thoughts about this blog, feel free to connect with me:</p>
<p>LinkedIn: <a target="_blank" href="https://www.linkedin.com/in/ravikyada">https://www.linkedin.com/in/ravikyada</a></p>
<p>Twitter: <a target="_blank" href="https://twitter.com/ravijkyada">https://twitter.com/ravijkyada</a></p>
<p>Until next time, Cheers to more learning and discovery✌🏻!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713155686492/2267a3d6-1967-4072-9be1-8916252fc4bf.gif" alt /></p>
<hr />
<p><a target="_blank" href="https://infiq.ravikyada.in/kubernetes-services-vs-ingress-a-beginners-guide-9c8d627c943f">Kubernetes Services vs. Ingress: A Beginner’s Guide</a> was originally published in <a target="_blank" href="https://infiq.ravikyada.in">InfiQ Technologies</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>
]]></content:encoded></item><item><title><![CDATA[Troubleshooting CORS Errors: How to Resolve CORS API Connection Issues]]></title><description><![CDATA[· Understanding CORS Errors:∘ Access-Control-Allow-Origin:∘ Preflight CORS request failed:· Causes of CORS Errors:· Resolving CORS Errors:∘ Identify the cause:∘ Configure CORS headers:∘ Check origin matching:∘ Secure SSL certificates:∘ Test and monit...]]></description><link>https://hashnode.ravikyada.in/troubleshooting-cors-errors-how-to-resolve-cors-api-connection-issues</link><guid isPermaLink="true">https://hashnode.ravikyada.in/troubleshooting-cors-errors-how-to-resolve-cors-api-connection-issues</guid><category><![CDATA[Backend Development]]></category><category><![CDATA[best practices]]></category><category><![CDATA[Devops]]></category><category><![CDATA[troubleshooting]]></category><category><![CDATA[Web Development]]></category><dc:creator><![CDATA[Ravi Kyada]]></dc:creator><pubDate>Tue, 23 Jan 2024 07:28:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1713155695721/d9a5ba04-883f-41ac-9c45-295193beed87.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713155693011/a0b2b62d-1406-4f2c-b263-dc00840c319b.png" alt /></p>
<p>· <a class="post-section-overview" href="#9b3b">Understanding CORS Errors:</a><br />∘ <a class="post-section-overview" href="#1fac">Access-Control-Allow-Origin:</a><br />∘ <a class="post-section-overview" href="#6fbc">Preflight CORS request failed:</a><br />· <a class="post-section-overview" href="#1a1d">Causes of CORS Errors:</a><br />· <a class="post-section-overview" href="#ca31">Resolving CORS Errors:</a><br />∘ <a class="post-section-overview" href="#0bd7">Identify the cause:</a><br />∘ <a class="post-section-overview" href="#3f8f">Configure CORS headers:</a><br />∘ <a class="post-section-overview" href="#61cd">Check origin matching:</a><br />∘ <a class="post-section-overview" href="#e285">Secure SSL certificates:</a><br />∘ <a class="post-section-overview" href="#f88d">Test and monitor:</a><br />· <a class="post-section-overview" href="#0ca8">Best practices for avoiding CORS Errors</a><br />· <a class="post-section-overview" href="#b1ba">Troubleshooting Tips</a><br />· <a class="post-section-overview" href="#dad0">Conclusion and Final Thoughts</a></p>
<p>Cross-Origin Resource Sharing (CORS) restrictions can often lead to frustration and connectivity issues when trying to establish API connections.</p>
<p>However, understanding how to troubleshoot and resolve CORS errors is essential for successful API integration.</p>
<p>In this blog, we will explore the ins and outs of CORS errors and provide step-by-step guidance on how to resolve them like a pro.</p>
<h3 id="heading-understanding-cors-errors">Understanding CORS Errors:</h3>
<p>CORS errors can occur when a web application running on one domain tries to request an API hosted on a different domain.</p>
<p>These errors are a security mechanism implemented by web browsers to prevent unauthorized access to resources.</p>
<p>Understanding the different types of CORS errors can help in troubleshooting and resolving them effectively.</p>
<h4 id="heading-access-control-allow-origin">Access-Control-Allow-Origin:</h4>
<p>One common type of CORS error is the “Access-Control-Allow-Origin” error. This error occurs when the API being accessed does not include the appropriate CORS headers in its response.</p>
<p>The solution to this error involves configuring the API server to include the “Access-Control-Allow-Origin” header with the appropriate origin or wildcard value.</p>
<h4 id="heading-preflight-cors-request-failed">Preflight CORS request failed:</h4>
<p>Another type of CORS error is the “Preflight CORS request failed” error. This error occurs when the web browser is making a preflight request to the API to determine if the actual request is safe to send.</p>
<p>The solution to this error involves ensuring that the API server responds correctly to the preflight requests by including the necessary CORS headers.</p>
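<p>To make the two cases concrete, here is a small framework-agnostic Python sketch of the decision a server makes when building its CORS response headers. The allowlisted origins and methods are made-up example values:</p>

```python
# Sketch of server-side CORS logic (framework-agnostic, illustrative values).
from typing import Optional

ALLOWED_ORIGINS = {"https://app.example.com"}   # example allowlist
ALLOWED_METHODS = {"GET", "POST", "OPTIONS"}    # example permitted methods

def cors_headers(origin: str, requested_method: Optional[str] = None) -> dict:
    """Return the CORS response headers for a request from `origin`.

    For a preflight (OPTIONS) request, `requested_method` carries the value
    of the browser's Access-Control-Request-Method header.
    """
    if origin not in ALLOWED_ORIGINS:
        return {}  # no CORS headers -> the browser blocks the response
    headers = {"Access-Control-Allow-Origin": origin}
    if requested_method is not None:            # this is a preflight request
        if requested_method not in ALLOWED_METHODS:
            return {}                           # preflight fails
        headers["Access-Control-Allow-Methods"] = ", ".join(sorted(ALLOWED_METHODS))
        headers["Access-Control-Max-Age"] = "3600"  # let the browser cache it
    return headers
```

<p>The "Access-Control-Allow-Origin" error corresponds to the first branch (the origin isn’t allowlisted, or the header is never sent at all); a failed preflight corresponds to the OPTIONS branch.</p>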
<p>By understanding the different types of CORS errors and their solutions, developers can troubleshoot and resolve API connection issues more efficiently.</p>
<p>In the next section, we will explore the step-by-step process of troubleshooting and resolving CORS errors with Best Practices. Stay tuned to learn the best practices for resolving CORS errors and ensuring seamless API integration.</p>
<p><a target="_blank" href="https://infiq.ravikyada.in/understanding-linux-directory-structure-dd31c042d02d">Understanding Linux Directory Structure.</a></p>
<h3 id="heading-causes-of-cors-errors">Causes of CORS Errors:</h3>
<p>CORS errors can occur due to a variety of factors. Understanding these causes can help developers pinpoint the source of the issue and resolve it effectively.</p>
<p>One common cause of CORS errors is misconfigured CORS headers on the API server. If the necessary CORS headers, such as “Access-Control-Allow-Origin” or “Access-Control-Allow-Methods”, are not included in the API’s response, the browser will block the request due to security concerns.</p>
<p>Another cause of CORS errors is mismatched origins. The “Access-Control-Allow-Origin” header specifies the allowed origins that can access the API. If the requesting domain does not match the specified origin(s), the browser will reject the request.</p>
<p>Additionally, CORS errors can be triggered by invalid SSL certificates. Browsers consider SSL certificates as a crucial security measure. If the certificate is missing or expired, the browser may block the request.</p>
<p>In the following section, we will delve into the step-by-step process of troubleshooting and resolving CORS errors effectively and professionally. Stay tuned to learn the best practices for overcoming these challenges and ensuring smooth API integration.</p>
<h3 id="heading-resolving-cors-errors">Resolving CORS Errors:</h3>
<p>Resolving CORS errors requires a systematic approach that combines technical knowledge and professional best practices.</p>
<p>By following these steps, you can effectively troubleshoot and resolve API connection issues caused by CORS errors.</p>
<h4 id="heading-identify-the-cause">Identify the cause:</h4>
<p>Begin by determining the specific cause of the CORS error. This can be achieved by examining error messages, reviewing server logs, and testing the API connections using different tools and methods. Understanding the root cause is essential for applying the appropriate solution.</p>
<h4 id="heading-configure-cors-headers">Configure CORS headers:</h4>
<p>If the CORS headers on the API server are misconfigured, you need to ensure that the necessary headers, such as “Access-Control-Allow-Origin” and “Access-Control-Allow-Methods,” are correctly set. Consult the API documentation or contact the API provider for guidance on the correct headers to use.</p>
<h4 id="heading-check-origin-matching">Check origin matching:</h4>
<p>Verify that the requesting domain matches the allowed origins specified in the “Access-Control-Allow-Origin” header. If there is a mismatch, update the header to include the correct origin(s).</p>
<h4 id="heading-secure-ssl-certificates">Secure SSL certificates:</h4>
<p>Ensure that your SSL certificates are valid and up to date. If the certificate is missing or expired, renew it or obtain a new one from a trusted certificate authority. Taking this step will prevent browsers from blocking the request due to security concerns.</p>
<h4 id="heading-test-and-monitor">Test and monitor:</h4>
<p>After implementing the necessary changes, thoroughly test your API connections to verify that the CORS errors have been resolved. Monitor the system for any recurring issues and be prepared to make further adjustments if needed.</p>
<p>By approaching CORS error resolution methodically and professionally, you can overcome these challenges and establish reliable API connections. In the next section, we will explore additional tips and best practices to further enhance your troubleshooting skills in handling CORS errors. Stay tuned!</p>
<h3 id="heading-best-practices-for-avoiding-cors-errors">Best practices for avoiding CORS Errors</h3>
<p>While troubleshooting CORS errors is important, it is even better to prevent them from occurring in the first place.</p>
<p>By following these best practices, you can significantly reduce the chances of encountering CORS errors and maintain smooth API connections:</p>
<ol>
<li><strong><em>Implement a robust CORS policy:</em></strong></li>
</ol>
<p>Establish a well-defined CORS policy that specifies which domains are allowed to access your API. By limiting access to only trusted domains, you can minimize the risk of unauthorized requests and potential security vulnerabilities.</p>
<p><strong><em>2. Utilize preflight requests:</em></strong></p>
<p>Preflight requests, also known as “OPTIONS” requests, are sent by the browser to check if a cross-origin request is allowed. By handling preflight requests correctly and providing the necessary headers and permissions, you can avoid many CORS errors.</p>
<p><strong><em>3. Use appropriate HTTP methods:</em></strong></p>
<p>Ensure that you are using the correct HTTP methods for your API requests. CORS errors can occur if you attempt to use a method that is not allowed by the server or if the method is not specified in the “Access-Control-Allow-Methods” header.</p>
<p><strong><em>4. Enable caching:</em></strong></p>
<p>If your API responses don’t frequently change, consider enabling caching. This allows the browser to store the response for a certain period, reducing the number of requests and potential CORS errors.</p>
<p><strong><em>5. Regularly update documentation:</em></strong></p>
<p>Keep your API documentation up to date, including any changes to CORS policy and headers. Clear and accurate documentation will help developers understand how to properly interact with your API and reduce the likelihood of CORS errors.</p>
<p>By incorporating these best practices into your API development process, you can ensure a smooth and error-free cross-origin communication experience. In the next section, we will dive into advanced troubleshooting techniques for handling complex CORS scenarios. Stay tuned!</p>
<h3 id="heading-troubleshooting-tips">Troubleshooting Tips</h3>
<p>Despite following best practices, it’s still possible to encounter CORS errors in more complex scenarios. Here are some troubleshooting tips to help you diagnose and resolve these issues professionally:</p>
<ol>
<li><strong><em>Check browser console logs:</em></strong></li>
</ol>
<p>When a CORS error occurs, the browser console is often the first place to look for more information. Examine the console logs for any error messages related to CORS. This can provide valuable insights into the specific issue at hand.</p>
<p><strong><em>2. Inspect HTTP headers:</em></strong></p>
<p>Review the HTTP headers being sent and received during the API request. Look for any missing or incorrect headers, such as the “Access-Control-Allow-Origin” or “Access-Control-Allow-Headers”. Make sure these headers are present and correctly configured.</p>
<p><strong><em>3. Confirm server-side configuration:</em></strong></p>
<p>Double-check the server-side configuration and ensure that it is properly handling CORS requests. Validate that the server is sending the necessary CORS headers and that the responses are correctly configured.</p>
<p><strong><em>4. Consider CORS proxies:</em></strong></p>
<p>In some cases, using a CORS proxy can help bypass CORS restrictions. These proxies can send requests on behalf of the client, effectively circumventing CORS limitations. However, exercise caution when using proxies and ensure they are only used when necessary.</p>
<p><strong><em>5. Update cross-origin domains:</em></strong></p>
<p>If you’re experiencing CORS errors with specific domains, verify that they are still allowed in your CORS policy. Domains can change or be updated, and it’s important to keep your CORS policy up to date.</p>
<p>By following these troubleshooting tips, you’ll be equipped with the knowledge and tools to troubleshoot and resolve CORS errors effectively. In the final section of this blog series, we will outline additional resources and tools that can aid in CORS error resolution. Stay tuned!</p>
<h3 id="heading-conclusion-and-final-thoughts">Conclusion and Final Thoughts</h3>
<p>In today’s blog post, we explored various troubleshooting tips that can help you resolve CORS errors professionally.</p>
<p>These tips include checking browser console logs for error messages, inspecting HTTP headers, confirming server-side configuration, considering CORS proxies, and updating cross-origin domains.</p>
<p>By following these troubleshooting strategies, you’ll be able to diagnose and fix CORS errors more effectively, ensuring smooth API connections. However, it’s important to remember that each troubleshooting scenario may be unique, and additional methods might be required for complex issues.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713155694575/4d253ffb-f593-4cb1-b0bc-42b7b42981f1.gif" alt /></p>
<hr />
<p><a target="_blank" href="https://infiq.ravikyada.in/troubleshooting-cors-errors-resolve-cors-api-connection-issues-06edd82d39e4">Troubleshooting CORS Errors: How to Resolve CORS API Connection Issues</a> was originally published in <a target="_blank" href="https://infiq.ravikyada.in">InfiQ Technologies</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>
]]></content:encoded></item><item><title><![CDATA[Understanding Linux Directory Structure.]]></title><description><![CDATA[Understanding the Linux Directory Structure is like having a map to guide you through the core of your computer’s operating system.
In this journey, we’re going to break down the directories, revealing their secrets, and helping you make sense of how...]]></description><link>https://hashnode.ravikyada.in/understanding-linux-directory-structure</link><guid isPermaLink="true">https://hashnode.ravikyada.in/understanding-linux-directory-structure</guid><category><![CDATA[Devops]]></category><category><![CDATA[Linux]]></category><category><![CDATA[linux-basics]]></category><dc:creator><![CDATA[Ravi Kyada]]></dc:creator><pubDate>Mon, 22 Jan 2024 10:51:11 GMT</pubDate><enclosure url="https://cdn-images-1.medium.com/max/1024/0*R76ZPz-SwoXdSpZs" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706090492881/5b83f8cb-1e5c-4595-b14c-df59dc8dca59.png" alt /></p>
<p>Understanding the Linux Directory Structure is like having a map to guide you through the core of your computer’s operating system.</p>
<p>In this journey, we’re going to break down the directories, revealing their secrets and the logic behind the structure, and help you make sense of how everything fits together.</p>
<p><strong>Root Directory: The Foundation of Linux Realms</strong></p>
<p>The / (root) directory is where it all begins, like the foundation of the whole OS. Let’s explore its depths and understand the core system files and folders that reside within the root directory.</p>
<p><strong>/bin and /sbin: Essential Binaries for System Operations</strong></p>
<p>Dive into the /bin and /sbin directories, where essential binaries reside, performing key functions for system operations. What binaries are crucial, and how do they contribute to the seamless functioning of your Linux system?</p>
<p>Both /bin and /sbin (after the latter was introduced) have traditionally lived on the root partition. /sbin contains the binaries needed only for managing the system (e.g. mount), while /bin contains binaries used by both ordinary users and system administrators (e.g. pwd, cd, ls).</p>
<p>/usr/bin (as well as /usr/sbin, /usr/local/…, /opt/…) holds the binaries that are not required to boot the system.</p>
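<p>You can see this split on your own machine with a couple of commands:</p>
<pre><code class="lang-plaintext"># Find where a user command actually lives.
command -v ls      # e.g. /bin/ls or /usr/bin/ls

# Inspect the directories themselves; on many modern distros
# /bin and /sbin are simply symlinks into /usr.
ls -ld /bin /sbin
</code></pre>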
<p><strong>/etc: Configuration Files Haven for System Customization</strong></p>
<p>Take a stroll through the /etc directory, the haven for configuration files. Understand the importance of this directory and explore common configuration files that shape your Linux environment.</p>
<p><strong>/home: Where Users Call Home</strong></p>
<p>In the /home directory, users find their sanctuary. Uncover the organization of user home directories and how users personalize their space within /home.</p>
<p><strong>/var: Dynamic Data Hub for System Activity</strong></p>
<p>The /var directory is a dynamic hub where variable data is stored. What role does this directory play, and how can you effectively manage the ever-changing data within /var?</p>
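<p>A quick way to get a feel for /var is to look at what lives there and how much space the logs consume (output will differ from system to system):</p>
<pre><code class="lang-plaintext">ls /var                       # typical entries: log, spool, tmp, cache, lib
du -sh /var/log 2>/dev/null   # disk space currently used by log files
</code></pre>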
<p><strong>/usr: Universal System Resources</strong></p>
<p>The /usr directory is a vast expanse of universal system resources, including commands, libraries, and static, shareable files.</p>
<p>It contains all commands, libraries, man pages, games, and static files needed for normal operation.</p>
<p><strong>/lib and /lib64: Chronicles of Shared Libraries</strong></p>
<p>Explore the realms of /lib and /lib64, where shared libraries reside. How do these directories contribute to the efficient functioning of applications and the system as a whole?</p>
<p><strong>/opt: Optional Software Packages for Enhanced Functionality</strong></p>
<p>The /opt directory is a space reserved for optional software packages. What does this mean for your Linux system, and how can you install and manage these optional packages?</p>
<p><strong>/tmp: Temporarily Yours — Managing Temporary Files with /tmp</strong></p>
<p>Unlock the secrets of the /tmp directory, a temporary haven for files. What purpose does /tmp serve, and how can you effectively manage temporary files in this space?</p>
<p><strong>/dev: Device Directory — Managing Devices in Linux</strong></p>
<p>Delve into the /dev directory, where devices are managed. What is the role of /dev in Linux, and how are special device files structured within this directory?</p>
<p><strong>/proc: Process Information Wonderland — Navigating the /proc Directory</strong></p>
<p>Embark on a journey into the /proc directory, a Wonderland of process information. What insights can you gain by exploring /proc, and how does it contribute to monitoring your system?</p>
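<p>A few safe commands to start exploring /proc (everything here is a virtual file generated by the kernel on the fly, not data on disk):</p>
<pre><code class="lang-plaintext">cat /proc/uptime         # seconds since boot, and idle time
head -n 3 /proc/meminfo  # memory statistics
ls /proc/self            # information about the current process itself
</code></pre>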
<p><strong>/mnt and /media: Mounting Points — Connecting External Devices</strong></p>
<p>Discover the purpose of /mnt and /media directories, designed for mounting external devices. How can you use these directories to seamlessly connect and manage external storage?</p>
<p><strong>/srv: Service Data Directory — Organizing Service Data in Linux</strong></p>
<p>Uncover the significance of the /srv directory, dedicated to organizing service data. How does /srv contribute to a well-structured Linux environment, especially in managing data for various services?</p>
<p><strong>Conclusion: Navigating the Linux Landscape with Confidence</strong></p>
<p>As we conclude our exploration of the Linux Directory Structure, you now possess a roadmap to navigate the intricate terrain of your Linux system. Each directory plays a unique role, contributing to the harmony of your operating system. Armed with this knowledge, you can enhance your Linux navigation skills and confidently traverse the Linux landscape.</p>
<p><strong>FAQ Section: Decoding Linux Directory Structure</strong></p>
<p><em>1. Why is the root directory (/) crucial in Linux?</em></p>
<ul>
<li>The / directory is the foundation of the Linux file system, containing essential system files and directories. It serves as the starting point for the entire directory structure.</li>
</ul>
<p><em>2. What is the significance of the /etc directory in Linux?</em></p>
<ul>
<li>The /etc directory is a haven for configuration files in Linux. It houses settings and configurations for various system and application components, allowing users to customize their Linux environment.</li>
</ul>
<p><em>3. How does the /home directory contribute to user experience in Linux?</em></p>
<ul>
<li>The /home directory is where users’ home directories are located. It provides users with a personalized space to store their files and configurations, enhancing their overall experience on the Linux system.</li>
</ul>
<p><em>4. What role does the /var directory play in Linux?</em></p>
<ul>
<li>The /var directory is a dynamic hub for variable data in Linux. It stores files that are expected to grow in size or change frequently, such as logs, spool files, and temporary data.</li>
</ul>
<p><em>5. Why are shared libraries stored in the /lib and /lib64 directories?</em></p>
<ul>
<li>The /lib and /lib64 directories house shared libraries in Linux, which are essential for the functioning of applications. These directories ensure that multiple programs can use the same library, optimizing system resources.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706090494381/c2edb5c0-ead0-4588-b46c-02a9052d4880.gif" alt /></p>
<hr />
<p><a target="_blank" href="https://infiq.ravikyada.in/understanding-linux-directory-structure-dd31c042d02d">Understanding Linux Directory Structure.</a> was originally published in <a target="_blank" href="https://infiq.ravikyada.in">InfiQ Technologies</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>
]]></content:encoded></item><item><title><![CDATA[Unlocking Terraform's Potential: Insights from the Hashicorp User Group Gandhinagar Event]]></title><description><![CDATA[Recently, On the 2nd Weekend of Dec. all the Techiest Gathered at the Hashicorp User Group Gandhinagar Event, there was a deep dive into the Lots of of tools available to enhance the Terraform experience in the IAC World.
with Awesome Mentoring and A...]]></description><link>https://hashnode.ravikyada.in/explore-day-guh-gandhinagar</link><guid isPermaLink="true">https://hashnode.ravikyada.in/explore-day-guh-gandhinagar</guid><category><![CDATA[Terraform]]></category><category><![CDATA[hashicorp]]></category><dc:creator><![CDATA[Ravi Kyada]]></dc:creator><pubDate>Sat, 16 Dec 2023 05:01:16 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1704190352803/f40acaba-0e5d-46a1-b26c-8886f2664874.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Recently, on the second weekend of December, tech enthusiasts gathered at the HashiCorp User Group Gandhinagar event for a deep dive into the many tools available to enhance the Terraform experience in the IaC world.</p>
<p>With the excellent mentoring and anchoring of Neel Shah, we kicked off the meetup morning, and then Rishang Bhavsar delivered an enlightening session on Terraform and his insights on AWS modules.</p>
<p>Along the way, all of the attendees also explored some of the tools around HashiCorp, like HashiCorp Vault, Nomad, Consul, and more.</p>
<p>Rishang's insightful session focused on the multifaceted capabilities of Terraform, particularly cost exploration, documentation generation, and the use of various supplementary tools within the Terraform workflow.</p>
<p>Let's dive in and explore these elements of the Terraform ecosystem. You can follow this GitHub repo to try all the tools demonstrated: <a target="_blank" href="https://github.com/Rishang/demo-tf-1">Rishang's GitHub Repo</a></p>
<h2 id="heading-automating-cost-management-in-terraform">Automating Cost Management in Terraform:</h2>
<p>Controlling costs is a significant concern for any cloud infrastructure deployment. Leveraging tools like Terraform Cost Estimation allows you to estimate infrastructure costs before deployment, enabling better decision-making.</p>
<p>Integrate Infracost tools into your Terraform pipeline to forecast expenses and optimize resource allocation, ensuring cost-effectiveness in your infrastructure provisioning.</p>
<p>Infracost shows cloud cost estimates for Terraform. It lets engineers see a cost breakdown and understand costs before making changes, either in the terminal, VS Code or pull requests.</p>
<p><mark>This Project is 100% open source and Available to Integrate with your Pipelines.<br /></mark><a target="_blank" href="https://github.com/infracost/infracost"><mark>https://github.com/infracost/infracost</mark></a></p>
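<p>A typical first run looks something like this (it assumes the infracost CLI is installed and you have a free API key; the module path is a placeholder):</p>
<pre><code class="lang-plaintext">infracost auth login                          # obtain and store an API key
infracost breakdown --path ./my-terraform-dir # show a cost breakdown
</code></pre>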
<h2 id="heading-automate-tf-mudules-documentry-with-terraform-docs"><strong>Automate TF Module Documentation with</strong> terraform-docs:</h2>
<p>Documentation is the backbone of any project. Generating comprehensive documentation for Terraform code can be time-consuming.</p>
<p>Tools like Terraform-docs automate this process by extracting metadata from your code to generate documentation in various formats including Markdown, JSON, and YAML.</p>
<p>By incorporating Terraform docs into your workflow, you can maintain up-to-date documentation effortlessly, enhancing collaboration and understanding among team members.</p>
<p><mark>Here you can find the Github repo for the terraform-docs: </mark> <a target="_blank" href="https://github.com/terraform-docs/terraform-docs/"><mark>https://github.com/terraform-docs/terraform-docs/</mark></a></p>
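<p>For example, generating a Markdown table of a module's inputs and outputs is a single command (the module path is a placeholder):</p>
<pre><code class="lang-plaintext">terraform-docs markdown table ./my-module > README.md
</code></pre>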
<h2 id="heading-exploration-of-infra-security-tools"><strong>Exploration of Infra Security Tools:</strong></h2>
<p>Rishang discussed tfsec, a tool for automating security scanning, highlighting its ability to identify vulnerabilities and compliance violations within Terraform code.</p>
<p>Additionally, he showcased Terraformer, a tool that generates Terraform configuration from existing infrastructure, helping teams bring already-running resources under IaC management.</p>
<p>tfsec gives detailed information about vulnerabilities across the Terraform stack. Some of the findings can be safely ignored (much like we ignore the terms &amp; conditions of newsletters) when they don't apply to your project.</p>
<p>We can also write custom checks in tfsec, much like custom test cases, depending on the project's requirements.</p>
<p><mark>This project is also Open-Source and available on GitHub: </mark> <a target="_blank" href="https://github.com/aquasecurity/tfsec"><mark>https://github.com/aquasecurity/tfsec</mark></a></p>
<p>Throughout the session, Rishang emphasized best practices for leveraging these tools effectively within Terraform workflows.</p>
<p>He provided real-world examples and case studies, illustrating how these tools can be seamlessly integrated into CI/CD pipelines, enhancing security, reducing manual efforts, and optimizing infrastructure costs.</p>
<p>In the end, we explored even more tools that can be integrated to analyze and estimate Terraform stacks.</p>
<p>Here are a few more tools discussed at the meetup:</p>
<h3 id="heading-hieven-terraform-visual">Terraform Visual (by hieven):</h3>
<p>It gives you a visual understanding of your Terraform code with an improved graphical view of the plan.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1704372303632/907c1c62-f8e3-4bf3-b240-5bca33a40b7b.png" alt class="image--center mx-auto" /></p>
<p>You can check more details here: <a target="_blank" href="https://hieven.github.io/">https://hieven.github.io/</a></p>
<h3 id="heading-snyktflint">TFLint:</h3>
<p>TFLint is a linter that surfaces errors, deprecated syntax, and possible security issues in your Terraform modules.</p>
<p>You can also create your own ruleset for a Terraform provider. Check more info here: <a target="_blank" href="https://snyk.io/advisor/golang/github.com/terraform-linters/tflint-ruleset-aws#package-footer">TFLint on Snyk Advisor</a></p>
<h3 id="heading-terragrunt">Terragrunt and More:</h3>
<p>Terragrunt is a thin wrapper around Terraform that keeps configurations DRY and helps manage multiple environments. A few other tools that came up during the meetup:</p>
<ul>
<li><p>Pike</p></li>
<li><p>Inframap</p></li>
<li><p>Opencost</p></li>
<li><p>Diagramcodes</p></li>
</ul>
<h2 id="heading-conclusion">Conclusion:</h2>
<p>Remember, the key to successful infrastructure management lies in embracing automation tools that empower you to optimize costs, maintain thorough documentation, ensure robust security practices, and visualize your architecture effectively.</p>
<p>Stay updated with the evolving landscape of Terraform and Hashicorp tools to continually refine and enhance your infrastructure deployment processes.</p>
<p>The Hashicorp User Group Gandhinagar Event was an insightful journey into the world of tools that complement Terraform and enable a more efficient and robust infrastructure management experience.</p>
<p>Incorporating these automation tools into your workflow will undoubtedly elevate your Terraform deployment to greater heights.</p>
<p>Keep exploring, automating, and innovating for a more seamless infrastructure provisioning experience with Terraform!</p>
]]></content:encoded></item><item><title><![CDATA[Intro to OpenTofu: The Terraform Alternative by CNCF]]></title><description><![CDATA[Are you working on cloud infrastructure management and the IAC Team, then you heard the name Terraform - The powerful tool that Supports Lots of Providers to deploy the Infra.  
But as looking for a powerful Infrastructure as Code (IAC) tool that sim...]]></description><link>https://hashnode.ravikyada.in/opentofu-the-terraform-alternative</link><guid isPermaLink="true">https://hashnode.ravikyada.in/opentofu-the-terraform-alternative</guid><category><![CDATA[Terraform]]></category><category><![CDATA[CNCF]]></category><category><![CDATA[opensource]]></category><category><![CDATA[#IaC]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Ravi Kyada]]></dc:creator><pubDate>Sun, 24 Sep 2023 11:34:30 GMT</pubDate><content:encoded><![CDATA[<p>If you work on cloud infrastructure management or an IaC team, you have surely heard the name Terraform: the powerful tool that supports a huge range of providers for deploying your infrastructure.</p>
<p>Are you looking for a powerful Infrastructure as Code (IaC) tool that simplifies your provisioning and deployment process? Meet OpenTofu, an open-source project supported by the Cloud Native Computing Foundation (CNCF).</p>
<p>On <strong>20th September 2023, it was announced that OpenTF is now OpenTofu and will be part of the CNCF. That was a huge win for the open-source communities working in the Infrastructure as Code industry.</strong></p>
<p>In this blog post, we’ll introduce you to OpenTofu, highlighting its features, benefits, and functionality as Terraform’s attractive new alternative.</p>
<h2 id="heading-what-is-opentofu">What is OpenTofu?</h2>
<p>OpenTofu is an Infrastructure as Code (IAC) tool designed to automate the provisioning and management of cloud resources, containers, and infrastructure components.</p>
<p>It’s an open-source project hosted under the CNCF since September 2023, which speaks volumes about its principles and community support.</p>
<p>OpenTofu is published under the Mozilla Public License (MPL). Publishing software under MPL means that it is open source, can be freely used, modified, and distributed.</p>
<h2 id="heading-controversy-of-terraform-in-opensource">Controversy Around Terraform in Open Source:</h2>
<p>Terraform has long been a core component in simplifying the management of infrastructure in cloud environments. DevOps teams around the world use it as the go-to tool thanks to its ease of use and powerful capabilities.</p>
<p>Recently, HashiCorp, the company behind Terraform, introduced significant license changes that have raised significant concerns within the open-source community.</p>
<p>Terraform releases after version 1.5 are moving from the Mozilla Public License v2.0 (MPLv2) to the Business Source License v1.1.</p>
<p>This means you can freely use Terraform's source code only as long as your use stays within the license's permitted scope; for uses outside it, especially certain commercial purposes, you must obtain a special license grant based on an individual agreement with HashiCorp.</p>
<p>Security patches for the MPL version will be provided until the end of December 2023.</p>
<h3 id="heading-key-features-of-opentofu"><strong>Key Features of OpenTofu</strong></h3>
<p>OpenTofu offers a variety of features that make it a compelling IaC choice over Terraform:</p>
<ol>
<li><p><strong>Declarative Syntax</strong>: OpenTofu uses a declarative approach, making it easy to define the desired state of your infrastructure. You describe what you want, and OpenTofu figures out how to make it happen.</p>
</li>
<li><p><strong>Multi-Cloud Support</strong>: It supports multiple cloud providers, allowing you to manage resources across platforms like AWS, Google Cloud, and Azure, all in one place.</p>
</li>
<li><p><strong>Extensibility</strong>: OpenTofu is designed to be highly extensible. You can create custom modules, plugins, and extensions to tailor it to your specific needs.</p>
</li>
<li><p><strong>Version Control</strong>: Just like code, infrastructure configurations can benefit from version control. OpenTofu integrates seamlessly with Git, making it easy to track changes and collaborate with team members.</p>
</li>
<li><p><strong>Community-Driven</strong>: Being part of the CNCF ecosystem means OpenTofu has a thriving community of users and contributors. You can find extensive documentation, tutorials, and support online.</p>
</li>
</ol>
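<p>As a small taste of the declarative style, here is a minimal sketch of a configuration (the provider, region, and bucket name are illustrative placeholders); OpenTofu reads the same HCL syntax that Terraform users already know:</p>
<pre><code class="lang-plaintext"># main.tf -- declare WHAT you want; the tool figures out HOW.
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "logs" {
  bucket = "example-app-logs-bucket"  # placeholder name
}
</code></pre>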
<h3 id="heading-why-choose-opentofu-over-terraform"><strong>Why Choose OpenTofu Over Terraform?</strong></h3>
<p>While Terraform is a popular choice for IAC, OpenTofu offers several advantages:</p>
<ol>
<li><p><strong>Multi-Cloud Natively</strong>: OpenTofu is designed with multi-cloud support in mind. While Terraform supports multiple cloud providers, OpenTofu's native multi-cloud capabilities simplify the management of complex, multi-cloud infrastructures.</p>
</li>
<li><p><strong>Simplicity &amp; Integration</strong>: OpenTofu's declarative syntax is often considered more straightforward and easier to read compared to Terraform's HCL (HashiCorp Configuration Language).</p>
</li>
<li><p><strong>Extensibility</strong>: OpenTofu's architecture allows for greater extensibility through plugins and modules, providing flexibility for specific use cases.</p>
</li>
<li><p><strong>Community Momentum</strong>: OpenTofu is gaining traction in the IAC space, with a growing community, continuous development, and an exciting roadmap.</p>
</li>
</ol>
<h3 id="heading-getting-started-with-opentofu"><strong>Getting Started with OpenTofu</strong></h3>
<p>Getting started with OpenTofu is a breeze. You can follow these simple steps:</p>
<ol>
<li><p><strong>Install OpenTofu</strong>: Begin by installing OpenTofu on your system. You can find installation instructions in the official documentation.</p>
</li>
<li><p><strong>Write Configurations</strong>: Define your infrastructure and cloud resources using OpenTofu's declarative syntax.</p>
</li>
<li><p><strong>Apply Configurations</strong>: Run OpenTofu commands to apply your configurations and provision resources.</p>
</li>
<li><p><strong>Version Control</strong>: Integrate OpenTofu with your version control system (e.g., Git) to manage configurations effectively.</p>
</li>
<li><p><strong>Explore the Community</strong>: Dive into the OpenTofu community, where you'll find tutorials, forums, and resources to help you master the tool.</p>
</li>
</ol>
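<p>Once installed, the day-to-day workflow boils down to a handful of commands run from a directory containing your .tf files (sketched below; see the official documentation for installation details):</p>
<pre><code class="lang-plaintext">tofu init      # download providers and set up the working directory
tofu plan      # preview the changes before applying them
tofu apply     # provision the resources
tofu destroy   # tear everything down when finished
</code></pre>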
<h3 id="heading-conclusion"><strong>Conclusion</strong></h3>
<p>OpenTofu, the CNCF-backed Terraform alternative, brings simplicity, multi-cloud support, and a thriving community to the world of Infrastructure as Code. Whether you're a seasoned IAC professional or just getting started, OpenTofu is worth exploring for its promising features and the CNCF stamp of approval. Stay tuned for more in-depth tutorials and insights into harnessing the full potential of OpenTofu in future blog posts.</p>
<p>Are you ready to dive into the world of OpenTofu?</p>
<p>Let us know your thoughts and experiences in the comments below!</p>
]]></content:encoded></item><item><title><![CDATA[AWS Reserved Instances: Ultimate Guide to Choose the Right Reserved Instances for You.]]></title><description><![CDATA[In the sector of cloud computing and Server deployment, Amazon Web Services (AWS) stands as the Major Provider providing a full-size array of services to satisfy the diverse needs of corporations.
When it involves optimizing fees and maximizing effic...]]></description><link>https://hashnode.ravikyada.in/aws-reserved-instances-guide</link><guid isPermaLink="true">https://hashnode.ravikyada.in/aws-reserved-instances-guide</guid><category><![CDATA[AWS]]></category><category><![CDATA[ec2]]></category><category><![CDATA[#AWSPricing #CloudCosts #AWSBudgeting #CostOptimization #PricingOptions #AWSBilling #SavingsPlans #ReservedInstances #SpotInstances #CloudFinance ]]></category><dc:creator><![CDATA[Ravi Kyada]]></dc:creator><pubDate>Thu, 06 Jul 2023 11:51:01 GMT</pubDate><content:encoded><![CDATA[<p>In the world of cloud computing and server deployment, Amazon Web Services (AWS) stands as the major provider, offering a vast array of services to satisfy the diverse needs of organizations.</p>
<p>When it comes to optimizing costs and maximizing efficiency on the Amazon Web Services (AWS) cloud platform, Reserved Instances (RIs) are a game-changer.</p>
<p>To optimize cost efficiency and improve overall value, AWS offers Reserved Instances (RIs), a pricing model that lets customers reserve EC2 instances for a particular duration.</p>
<p>By committing to a certain amount of usage over a term, you can significantly reduce your EC2 instance charges.</p>
<p>In this comprehensive guide, we will explore AWS Reserved Instances and provide valuable insights to help you choose the right Reserved Instances for your business needs.</p>
<p>So, let's dive into the points to understand AWS Reserved Instances!</p>
<h2 id="heading-understanding-aws-reserved-instances">Understanding AWS Reserved Instances:</h2>
<p>AWS Reserved Instances give users the ability to reserve EC2 instances in advance, offering significant savings compared to On-Demand instances.</p>
<p>By committing to specific instance types in specific Availability Zones over a designated period, you can unlock substantial cost benefits.</p>
<p>RIs are available in three options: Standard, Convertible, and Scheduled. Each option offers particular benefits and flexibility, catering to different workload requirements.</p>
<h2 id="heading-analyzing-your-workloads">Analyzing Your Workloads:</h2>
<p>Before diving into Reserved Instances, it is important to analyze your workloads and understand their usage patterns.</p>
<p>AWS provides various tools, such as AWS Cost Explorer and Trusted Advisor, to help you understand your utilization patterns, usage costs, and potential savings.</p>
<p>With clear information about your workloads, you can make informed choices about the best RI types, terms, and payment options.</p>
<h2 id="heading-utilizing-aws-cost-optimization-tools">Utilizing AWS Cost Optimization Tools:</h2>
<p>To optimize your RI usage and ensure you're making the most of your investments, AWS offers various cost optimization tools and services.</p>
<p>Services like AWS Trusted Advisor and AWS Cost Explorer can provide valuable insights into your RI utilization and recommend optimization strategies.</p>
<p>By leveraging these tools, you can continuously monitor and adjust your RI portfolio, further driving down costs and improving performance.</p>
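<p>For example, the AWS CLI can ask Cost Explorer for RI purchase recommendations based on your historical usage (this assumes configured credentials; the option values shown are examples of the accepted Cost Explorer API values):</p>
<pre><code class="lang-plaintext">aws ce get-reservation-purchase-recommendation \
    --service "Amazon Elastic Compute Cloud - Compute" \
    --lookback-period-in-days SIXTY_DAYS \
    --term-in-years ONE_YEAR \
    --payment-option NO_UPFRONT
</code></pre>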
<h2 id="heading-choosing-the-right-reserved-instances">Choosing the Right Reserved Instances:</h2>
<p>When choosing Reserved Instances, there are several scenarios and options to evaluate.</p>
<h3 id="heading-a-standard-ris">A). Standard RIs:</h3>
<p>Standard RIs offer the highest discount rate compared to other options. They provide a stable and predictable workload, making them suitable for applications with steady traffic and consistent resource requirements.</p>
<h3 id="heading-b-convertible-ris">B). Convertible RIs:</h3>
<p>Convertible RIs offer more flexibility than Standard RIs. They allow you to modify the instance attributes over time, making them ideal for workloads that may change in the future or for businesses that require frequent instance modifications.</p>
<h3 id="heading-c-scheduled-ris">C). Scheduled RIs:</h3>
<p>Scheduled RIs cater to workloads with predictable recurring schedules, such as batch jobs or seasonal traffic spikes. With Scheduled RIs, you can reserve capacity for specific time slots, ensuring availability during peak hours.</p>
<h2 id="heading-factors-to-consider-when-choosing-reserved-instances">Factors to Consider When Choosing Reserved Instances:</h2>
<h3 id="heading-a-usage-patterns-and-workload-predictability">A) Usage Patterns and Workload Predictability:</h3>
<p>Analyzing your workload patterns and predicting your long-term usage is crucial for selecting the right RIs. If your workload has consistent usage, consider purchasing Standard RIs. For workloads with uncertain or evolving requirements, Convertible RIs provide more flexibility.</p>
<h3 id="heading-b-instance-size-and-region">B) Instance Size and Region:</h3>
<p>Identify the instance size and AWS region that align with your workload demands. AWS allows you to reserve instances across various instance types and sizes, so choose wisely to optimize your savings.</p>
<h3 id="heading-c-payment-options">C) Payment Options:</h3>
<p>RIs can be paid for upfront, partially upfront, or with no upfront payment. Evaluating your budget and cash flow requirements will help determine the most suitable payment option for your organization.</p>
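<p>To make the trade-off concrete, here is a small back-of-the-envelope comparison using hypothetical hourly rates (real prices vary by instance type, region, term, and payment option, so check the AWS pricing pages):</p>
<pre><code class="lang-plaintext"># Hypothetical rates: $0.10/hr On-Demand vs. an effective $0.062/hr
# for a 1-year Standard RI running 24x7.
on_demand=$(awk 'BEGIN { printf "%.2f", 0.10 * 24 * 365 }')
reserved=$(awk 'BEGIN { printf "%.2f", 0.062 * 24 * 365 }')
savings=$(awk -v od="$on_demand" -v ri="$reserved" 'BEGIN { printf "%.2f", od - ri }')
echo "Yearly On-Demand: $on_demand  Reserved: $reserved  Savings: $savings"
</code></pre>
<p>Even with modest illustrative numbers, the yearly difference for an always-on instance adds up quickly.</p>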
<h2 id="heading-conclusion">Conclusion:</h2>
<p>AWS Reserved Instances present an excellent opportunity for organizations to optimize their EC2 instance costs while maintaining flexibility.</p>
<p>By understanding the different types of RIs, analyzing usage patterns, and implementing effective planning strategies, you can achieve significant cost savings on the AWS cloud platform.</p>
<p>Remember to regularly monitor and adjust your reservations to align with your evolving workload requirements.</p>
<p>Start leveraging the power of AWS Reserved Instances today and unlock the full potential of cost optimization in the cloud.</p>
]]></content:encoded></item><item><title><![CDATA[Nginx in Docker with Lets Encrypt SSL: Configure Nginx and SSL with Docker Compose]]></title><description><![CDATA[As a DevOps enthusiast, I'm always on the lookout for ways to enhance the security and performance of web applications.
NGINX - a powerful essential tool, an open-source web server that can also work as a reverse proxy, load balancer, and HTTP cache....]]></description><link>https://hashnode.ravikyada.in/lets-encrypt-ssl-configure-nginx-ssl-docker-compose</link><guid isPermaLink="true">https://hashnode.ravikyada.in/lets-encrypt-ssl-configure-nginx-ssl-docker-compose</guid><category><![CDATA[2Articles1Week]]></category><dc:creator><![CDATA[Ravi Kyada]]></dc:creator><pubDate>Sat, 22 Apr 2023 09:40:44 GMT</pubDate><content:encoded><![CDATA[<p>As a DevOps enthusiast, I'm always on the lookout for ways to enhance the security and performance of web applications.</p>
<p>NGINX is a powerful, essential tool: an open-source web server that can also work as a reverse proxy, load balancer, and HTTP cache.</p>
<p>In this blog post, I'll be sharing how to set up NGINX with a free Let's Encrypt SSL/TLS certificate on Docker, so you can ensure your web apps are served safely over HTTPS.</p>
<p>Self-signed certificates are an excellent tool for testing and development purposes, but they are not the best choice for production environments.</p>
<p>Instead, you should use a trusted SSL/TLS certificate issued by a Certificate Authority (CA), such as Let's Encrypt, to ensure the security of your website.</p>
<h3 id="heading-prerequisites">Prerequisites:</h3>
<p>Before we begin, make sure you have the following installed on your system:</p>
<ul>
<li><p>Docker</p>
</li>
<li><p>Docker Compose</p>
</li>
<li><p>Basic knowledge of NGINX configuration and SSL setup</p>
</li>
</ul>
<h3 id="heading-what-is-docker-compose">What is Docker-Compose:</h3>
<p>Docker Compose is a Docker utility that simplifies the deployment and management of multiple containers in a single application.</p>
<p>It allows you to define the configuration of each container in a YAML file, automating the creation, startup, and shutdown of containers. With docker-compose, you can easily run multiple containers and their dependencies together.</p>
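<p>Once a compose file is in place, the usual commands look like this (service names must match those defined in your compose file):</p>
<pre><code class="lang-plaintext">docker-compose up -d            # start all containers in the background
docker-compose logs -f nginx    # follow the logs of a single service
docker-compose down             # stop and remove the containers
</code></pre>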
<h2 id="heading-lets-configure-nginx-conf-file">Let's Configure Nginx Conf File:</h2>
<p>You can use any simple nginx.conf file with a reverse proxy on port 443; basic knowledge of NGINX is all you need.</p>
<p>Basically, when configuring NGINX with Docker, you have to add the SSL configuration to the nginx.conf file manually.</p>
<p>Here is one nginx.conf file that you can use for demonstration:</p>
<div class="gist-block embed-wrapper" data-gist-show-loading="false" data-id="dde9be0cc3549be5d4e398d670687cfd"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a href="https://gist.github.com/ravikyada/dde9be0cc3549be5d4e398d670687cfd" class="embed-card">https://gist.github.com/ravikyada/dde9be0cc3549be5d4e398d670687cfd</a></div><p> </p>
<p>This NGINX configuration file sets up a web server for the domain test.demo.in. It listens on both the HTTP and HTTPS ports (80 and 443, respectively).</p>
<p>The ssl_certificate and ssl_certificate_key directives specify the SSL/TLS certificate and key for the domain test.demo.in; this setup uses Certbot to obtain and manage the certificates.</p>
<p>The final location /.well-known/acme-challenge/ block specifies the root directory where Certbot stores challenge-response files during certificate issuance. This is required for domain verification when NGINX runs in a container.</p>
<p>Overall, this configuration file sets up a basic HTTPS server with SSL certificates obtained using Certbot, which proxies all requests to another web server.</p>
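<p>The overall shape of such a configuration looks like the sketch below. Treat it as an illustration, not a drop-in file: the domain <code>test.demo.in</code>, the Certbot default certificate paths, and the upstream address <code>app:3000</code> are all placeholders:</p>
<pre><code class="lang-nginx">server {
    listen 80;
    server_name test.demo.in;

    # Serve ACME challenge files for Certbot's domain verification
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    # Redirect everything else to HTTPS
    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name test.demo.in;

    # Certificates issued and renewed by Certbot
    ssl_certificate     /etc/letsencrypt/live/test.demo.in/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/test.demo.in/privkey.pem;

    # Proxy all requests to the upstream application
    location / {
        proxy_pass http://app:3000;
        proxy_set_header Host $host;
    }
}
</code></pre>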
<h2 id="heading-setting-up-docker-composeyaml">Setting Up docker-compose.yaml:</h2>
<p>Let's create a basic <code>docker-compose.yml</code> file that defines the nginx and certbot containers for our requirements.</p>
<div class="gist-block embed-wrapper" data-gist-show-loading="false" data-id="43a694a1897e79c61562358ab0936910"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a href="https://gist.github.com/ravikyada/43a694a1897e79c61562358ab0936910" class="embed-card">https://gist.github.com/ravikyada/43a694a1897e79c61562358ab0936910</a></div><p> </p>
<p>Before applying this Compose file, edit the certbot command in its last lines: change the email address and domain name to your own.</p>
<pre><code class="lang-plaintext">certonly --webroot --webroot-path=/var/www/certbot --email test.demo@gmail.com --agree-tos -d test.demo.in -d www.test.demo.in
</code></pre>
<p>The entrypoint below is used to automatically renew the SSL certificate from the certbot container:</p>
<pre><code class="lang-plaintext">entrypoint: /bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h &amp; wait $${!}; done;'
</code></pre>
<p>We attach three volumes to the nginx container because it needs the nginx.conf file and the SSL certificates from the local system.</p>
<h3 id="heading-workarounds-to-run-container-successfully">Workarounds to Run the Container Successfully:</h3>
<p>After all of this, there is one workaround left: NGINX will not start until the certificate files it references actually exist.</p>
<p>So, to run the NGINX server successfully, we need to create a certificate before starting docker-compose.</p>
<p>This is because Let's Encrypt needs to perform an ACME challenge request to verify your domain ownership before issuing a certificate.</p>
<p>Here is a shell script that does all of this work for us: <a target="_blank" href="https://gist.githubusercontent.com/RAVIKYADA/5a8fe47047f3f543fc7c9eb34cc07ced/raw/df1ee5838b1a62b2bc7e540a6f80b1562e8f377c/lets-encrypt-init.sh">LetsEncrypt-init.sh</a></p>
<div class="gist-block embed-wrapper" data-gist-show-loading="false" data-id="f25f904a842027003cca2c2256e60624"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a href="https://gist.github.com/ravikyada/f25f904a842027003cca2c2256e60624" class="embed-card">https://gist.github.com/ravikyada/f25f904a842027003cca2c2256e60624</a></div><p> </p>
<p>Please change the domain name and other preferences in the script before running it.</p>
<h3 id="heading-how-certbot-ssl-works-in-docker-container">How Certbot SSL Works in a Docker Container:</h3>
<p>Certbot is a free, open-source tool that simplifies the process of obtaining and renewing SSL/TLS certificates for your website.</p>
<p>Before running the Certbot command to install a new SSL/TLS certificate, it's necessary to set up a basic instance of Nginx to make your domain accessible over HTTP.</p>
<p>The web root directory is mounted as a volume in the Docker Compose file, so Certbot can write files to the directory and the NGINX service can serve those files to Let's Encrypt for verification.</p>
<p>Once the Certbot service has generated the SSL/TLS certificate, it will be saved in the Certbot configuration directory, which is mounted as a volume in the Docker Compose file.</p>
<p>In summary, Certbot interacts with Let's Encrypt to generate SSL/TLS certificates for your website, and the Docker Compose file and its volumes set up a complete system for securing your website with HTTPS.</p>
]]></content:encoded></item><item><title><![CDATA[Build Docker Images with Kaniko Inside Jenkins Deployed On Kubernetes]]></title><description><![CDATA[Building Docker images in a Local system with a simple Dockerfile is the Easiest Task for those who work with Docker and Kubernetes.
But the tricky thing is when we need to work on building Docker images inside Docker Containers. That's Nothing but D...]]></description><link>https://hashnode.ravikyada.in/kaniko-dind-without-docker-sock-kubernetes</link><guid isPermaLink="true">https://hashnode.ravikyada.in/kaniko-dind-without-docker-sock-kubernetes</guid><category><![CDATA[Kaniko]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[Jenkins]]></category><dc:creator><![CDATA[Ravi Kyada]]></dc:creator><pubDate>Thu, 20 Apr 2023 16:49:25 GMT</pubDate><content:encoded><![CDATA[<p>Building Docker images in a Local system with a simple Dockerfile is the Easiest Task for those who work with Docker and Kubernetes.</p>
<p>But things get tricky when we need to build Docker images inside Docker containers. That's nothing but Docker-in-Docker (DinD).</p>
<p>Most of us mount /var/run/docker.sock into the base image to build Docker images, but this approach breaks when building images inside Kubernetes pods.</p>
<p>Kaniko is a tool that enables building Docker images in a container without needing to run a Docker daemon. This makes it ideal for building images in environments where Docker is not installed or for building images inside a container.</p>
<p>Kaniko runs in a Docker container and has the single purpose of building and pushing a Docker image. This design means it’s easy for us to spin one up from within a Jenkins pipeline, running as many as we need.</p>
<p>Kaniko runs as a container and takes in three arguments: a Dockerfile, a build context, and the name of the registry to which it should push the final image.</p>
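<p>Put together, a typical Kaniko invocation looks like the sketch below; the Git URL and ECR registry/repository are placeholders for your own resources:</p>
<pre><code class="lang-bash"># Build from a Git-hosted context and push the result to a registry
/kaniko/executor \
    --dockerfile=Dockerfile \
    --context=git://github.com/example/app.git \
    --destination=1234567890.dkr.ecr.ap-south-1.amazonaws.com/app:latest
</code></pre>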
<h2 id="heading-what-is-docker-daemon">What is Docker Daemon:</h2>
<p>The Docker daemon is a persistent background process that manages Docker images, containers, networks, and storage volumes. It constantly listens for Docker API requests and processes them.</p>
<p>One caveat is that root access is required to interact with the Docker daemon, and running any application with root access is an obvious security risk.</p>
<h2 id="heading-kaniko-the-best-solution-for-dind-in-k8s-124">Kaniko: The Best Solution for DinD in Kubernetes 1.24:</h2>
<p>Kubernetes makes it easy to deploy and scale containerized applications. However, it does not include a built-in way to build Docker images.</p>
<p>Previously, the common approach was to use a Docker-in-Docker (DinD) setup or to mount the Docker socket inside the container.</p>
<p>However, these approaches have some drawbacks, such as requiring privileged containers, potential security concerns, and difficulty in managing the Docker daemon inside the container.</p>
<p>In Kubernetes 1.24, the Kubernetes project removed dockershim, ending built-in support for the Docker daemon as the container runtime and ruling out workflows that rely on the node's Docker daemon from inside a container.</p>
<p>Instead, the recommended approach is to use a tool like Kaniko for building Docker images. Kaniko allows us to build Docker images without requiring a Docker daemon or mounting the Docker socket inside the container.</p>
<p>This eliminates the need for privileged containers, reduces security concerns, and makes it easier to manage the container.</p>
<h3 id="heading-lets-start-towards-creating-kaniko-pod-to-build-docker-images">Let's Start Creating a Kaniko Pod to Build Docker Images:</h3>
<p>Prerequisites before getting started:</p>
<p>1. A Kubernetes environment with Docker registry secrets.</p>
<p>2. A Docker configuration file, mounted as a volume at the /kaniko/.docker/ path of the Kaniko container.</p>
<p>3. A Docker build context with a Dockerfile.</p>
<h3 id="heading-understanding-docker-arguments">Understanding Kaniko Arguments:</h3>
<p>Before getting your hands dirty with Kaniko, let's take an overview of the arguments that matter when working with it.</p>
<ol>
<li><p>Dockerfile: the file containing all the steps to run while building the image.</p>
</li>
<li><p>Destination: the Docker registry where the built image should be pushed. This means Kaniko builds and pushes the image in a single command.</p>
<p> If you just want to build the image without pushing it to a registry, you can use the --no-push flag, which will only build the image, nothing more.</p>
</li>
<li><p>Build context: the same as a normal Docker build context, the directory from which Docker builds the image.</p>
</li>
</ol>
<p>Kaniko supports a number of storage backends for the Docker context via the --context argument. The supported backends are:</p>
<ul>
<li><p>Git Repository</p>
</li>
<li><p>Local Directory</p>
</li>
<li><p>S3 Bucket</p>
</li>
<li><p>GCS Bucket</p>
</li>
<li><p>Azure Blob Storage</p>
</li>
<li><p>Standard Input</p>
</li>
</ul>
<h3 id="heading-creating-secrets-for-aws-cli-in-jenkins-branch">Creating Secrets for the AWS CLI in the Jenkins Namespace:</h3>
<p>Since we are going to push the image built by Kaniko to a private AWS ECR repository, we first need an AWS access key and secret key with ECR permissions.</p>
<p>Kaniko reads the AWS access and secret keys from a volume that we mount during pod creation, so first create an AWS credentials file.</p>
<h3 id="heading-sample-aws-credentials-file">Sample AWS Credentials File:</h3>
<pre><code class="lang-bash">[default]
aws_access_key_id = AKXXXXXXXXXMQ
aws_secret_access_key = HdXXXXXXXXXXXXXXXXX458
</code></pre>
<p>Create a secret from that file with this command:</p>
<pre><code class="lang-bash">kubectl create secret generic aws-secret --from-file=&lt;path to Credentials file&gt; -n jenkins
</code></pre>
<p>Docker manages its registry settings with a config.json file inside the ~/.docker/ directory. Let's create a ConfigMap for the Docker configuration that will manage the credential store for the AWS ECR registry.</p>
<pre><code class="lang-json">{
    "auths": {
        "1234567890.dkr.ecr.ap-south-1.amazonaws.com": {},
        "https://index.docker.io/v1/": {}
    },
    "credsStore": "ecr-login"
}
</code></pre>
<p>Use this command to create the ConfigMap in the jenkins namespace:</p>
<pre><code class="lang-bash">kubectl create configmap docker-config --from-file=&lt;path to docker config.json file&gt; -n jenkins
</code></pre>
<p>Now we are all set with the Docker registry credentials for AWS. For GCR or Docker Hub, you may need to make some changes to Docker's config.json.</p>
<p>On macOS and Linux, config.json is normally stored at ~/.docker/config.json.</p>
<h3 id="heading-going-forward-create-jenkinsfile-with-podtemplate-of-kaniko">Next, Create a Jenkinsfile with a Kaniko Pod Template:</h3>
<p>Set up the Jenkinsfile with the Kaniko executor image, and mount volumes for the AWS credentials secret and the Docker registry ConfigMap.</p>
<p>Here is the pod.yaml that you should use to create the pod inside the Jenkinsfile:</p>
<div class="gist-block embed-wrapper" data-gist-show-loading="false" data-id="6f76bdbfdc192e5f29839e2a7dbc869b"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a href="https://gist.github.com/ravikyada/6f76bdbfdc192e5f29839e2a7dbc869b" class="embed-card">https://gist.github.com/ravikyada/6f76bdbfdc192e5f29839e2a7dbc869b</a></div><p> </p>
<p>Create the Jenkins job stage with the environment variable PATH = "/busybox:/kaniko:$PATH". This variable helps Kaniko pick up its context from the current directory inside the pod container.</p>
<p>Now Here is the Final Jenkinsfile:</p>
<div class="gist-block embed-wrapper" data-gist-show-loading="false" data-id="2fd9c74ea03e43f27a4b8dd7d4074888"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a href="https://gist.github.com/ravikyada/2fd9c74ea03e43f27a4b8dd7d4074888" class="embed-card">https://gist.github.com/ravikyada/2fd9c74ea03e43f27a4b8dd7d4074888</a></div><p> </p>
<hr />
<p>Thank you so much for Reading the Article till the End! 🙌🏻 Your time and interest truly mean a lot 😁📃.</p>
<p>If you have any questions or thoughts about this blog, feel free to connect with me:</p>
<p>LinkedIn: <a target="_blank" href="https://www.linkedin.com/in/ravikyada">https://www.linkedin.com/in/ravikyada</a></p>
<p>Twitter: <a target="_blank" href="https://twitter.com/ravijkyada">https://twitter.com/ravijkyada</a></p>
<p>Until next time, Cheers to more learning and discovery✌🏻!</p>
]]></content:encoded></item><item><title><![CDATA[Getting Started with AWS Lambda: A Basic Guide and Key Features for Beginners]]></title><description><![CDATA[In today’s digital age, there is a growing demand for software applications that can scale quickly, be deployed easily, and cost-effectively. This is where serverless computing comes in, and AWS Lambda is a popular solution for implementing the serve...]]></description><link>https://hashnode.ravikyada.in/getting-started-aws-lambda-basic-guide-key-features-beginners</link><guid isPermaLink="true">https://hashnode.ravikyada.in/getting-started-aws-lambda-basic-guide-key-features-beginners</guid><category><![CDATA[AWS]]></category><category><![CDATA[aws lambda]]></category><category><![CDATA[serverless]]></category><dc:creator><![CDATA[Ravi Kyada]]></dc:creator><pubDate>Sun, 16 Apr 2023 05:16:24 GMT</pubDate><content:encoded><![CDATA[<p>In today’s digital age, there is a growing demand for software applications that can scale quickly, be deployed easily, and cost-effectively. This is where serverless computing comes in, and AWS Lambda is a popular solution for implementing the serverless architecture.</p>
<p>In this article, we will discuss AWS Lambda and its key features, and how it can benefit businesses of all sizes.</p>
<h1 id="heading-what-is-aws-lambda"><strong>What is AWS Lambda?</strong></h1>
<p>AWS Lambda is a serverless computing service provided by Amazon Web Services (AWS) that allows users to run code in response to specific events or triggers, without the need to manage servers.</p>
<p>It is a platform that enables developers to build and run applications without worrying about the underlying infrastructure. AWS Lambda is designed to handle and scale automatically based on the traffic and workload it receives, making it a cost-effective and efficient solution for businesses of all sizes.</p>
<h1 id="heading-how-does-aws-lambda-work"><strong>How does AWS Lambda work?</strong></h1>
<p>AWS Lambda works by running code in response to events or triggers such as HTTP requests, changes to data in an AWS S3 bucket, or changes in a database.</p>
<p>Developers can write code in languages like Node.js, Python, Java, C#, and Go and upload it to AWS Lambda. Once the code is uploaded, it is ready to be executed in response to the specified events or triggers.</p>
<p>AWS Lambda automatically manages the underlying infrastructure, including the scaling of the resources needed to run the code. This makes it easy to develop and deploy applications without having to worry about managing servers.</p>
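<p>As a rough illustration of this workflow, a function can be packaged, created, and invoked from the AWS CLI as sketched below; the function name, IAM role ARN, handler file, and account ID are placeholders for your own resources:</p>
<pre><code class="lang-bash"># Package the handler code (Python runtime shown as an example)
zip function.zip lambda_function.py

# Create the function from the zip archive
aws lambda create-function \
    --function-name my-demo-function \
    --runtime python3.12 \
    --handler lambda_function.lambda_handler \
    --zip-file fileb://function.zip \
    --role arn:aws:iam::123456789012:role/my-lambda-role

# Invoke it and write the response to a file
aws lambda invoke --function-name my-demo-function response.json
</code></pre>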
<h1 id="heading-key-features-of-aws-lambda"><strong>Key Features of AWS Lambda:</strong></h1>
<ol>
<li><p>Scalability: AWS Lambda can handle large workloads and automatically scale resources to meet demand. It can support thousands of concurrent requests and can scale up or down based on the traffic.</p>
</li>
<li><p>Cost-Effective: AWS Lambda pricing is based on the number of requests and the duration of execution time, making it a cost-effective solution. This means that businesses only pay for the time their code is executed, and there is no need to pay for idle time or unused resources.</p>
</li>
<li><p>Easy to Use: AWS Lambda is easy to set up, and there is no need to manage or provision servers. This makes it easy to develop and deploy applications quickly.</p>
</li>
<li><p>Integrations: AWS Lambda integrates with other AWS services such as Amazon S3, Amazon DynamoDB, and Amazon API Gateway, making it easy to build complex applications.</p>
</li>
<li><p>Security: AWS Lambda is designed with security in mind and provides multiple layers of security, including access control and encryption. This ensures that your code and data are protected at all times.</p>
</li>
</ol>
<h1 id="heading-conclusion"><strong>Conclusion:</strong></h1>
<p>AWS Lambda is an excellent solution for businesses that want to implement serverless computing. It provides an easy-to-use platform that enables developers to build and run applications without worrying about the underlying infrastructure.</p>
<p>With its scalability, cost-effectiveness, and integrations with other AWS services, AWS Lambda is a powerful tool that can help businesses of all sizes improve their efficiency and reduce costs. We hope that this beginner’s guide has provided you with a better understanding of AWS Lambda and its key features.</p>
]]></content:encoded></item><item><title><![CDATA[Basics of Docker networking: Configuring communication between containers]]></title><description><![CDATA[Docker is an open-source platform that enables developers to build, ship, and run applications in containers.
One of the key advantages of containerization is the ability to easily configure communication between containers, allowing them to work tog...]]></description><link>https://hashnode.ravikyada.in/basic-docker-networking-configuring-communication-containers</link><guid isPermaLink="true">https://hashnode.ravikyada.in/basic-docker-networking-configuring-communication-containers</guid><category><![CDATA[Docker]]></category><category><![CDATA[Devops]]></category><category><![CDATA[networking]]></category><category><![CDATA[containers]]></category><category><![CDATA[Security]]></category><dc:creator><![CDATA[Ravi Kyada]]></dc:creator><pubDate>Sat, 15 Apr 2023 14:42:39 GMT</pubDate><content:encoded><![CDATA[<p>Docker is an open-source platform that enables developers to build, ship, and run applications in containers.</p>
<p>One of the key advantages of containerization is the ability to easily configure communication between containers, allowing them to work together seamlessly to deliver a complete solution.</p>
<p>In this article, we'll explore the basics of Docker networking, and show you how to configure communication between containers.</p>
<h2 id="heading-what-is-docker-networking">What is Docker Networking:</h2>
<p>Docker containers are designed to be self-contained and isolated, but they often need to communicate with each other to form a complete application.</p>
<p>Docker networking refers to the process of creating a virtual network that enables the communication between containers, as well as between containers and the host machine.</p>
<p>By default, Docker creates a bridge network for each host, which provides a secure and isolated environment for container communication.</p>
<p>When you create a container, it's automatically assigned an IP address on the bridge network, which allows it to communicate with other containers and the host.</p>
<p>However, if you want to configure more complex networking scenarios, such as creating multiple networks or connecting containers to external networks, you'll need to use Docker networking commands.</p>
<p>In this article, we will discuss how to configure communication between containers using Docker networking.</p>
<h2 id="heading-types-of-docker-networks">Types of Docker Networks</h2>
<p>Docker provides several types of network drivers for creating custom networks that connect containers. Let's take a look at the different types of networks:</p>
<ol>
<li>Bridge Network</li>
</ol>
<p>The bridge network is the default network in Docker, and it provides automatic IP address assignment to containers. Containers connected to the same bridge network can communicate with each other using their IP addresses or hostnames.</p>
<p>To create a bridge network, use the following command:</p>
<pre><code class="lang-bash">docker network create my-bridge-network
</code></pre>
<p>You can connect a container to the bridge network using the <code>--network</code> option when running the container:</p>
<pre><code class="lang-bash">docker run --network my-bridge-network my-container
</code></pre>
<ol start="2">
<li>Host Network</li>
</ol>
<p>The host network driver allows the container to use the host's networking directly. This means that the container does not have a separate network namespace and shares the host's network stack. This can improve network performance but can also cause security issues.</p>
<p>To use the host network, use the following command:</p>
<pre><code class="lang-bash">docker run --network host my-container
</code></pre>
<ol start="3">
<li>Overlay Network</li>
</ol>
<p>The overlay network driver allows you to create a distributed network across multiple Docker hosts. It uses the VXLAN protocol to create an overlay network that spans multiple hosts. Containers connected to the same overlay network can communicate with each other, even if they are running on different hosts.</p>
<p>To create an overlay network, use the following command:</p>
<pre><code class="lang-bash">docker network create --driver overlay my-overlay-network
</code></pre>
<ol start="4">
<li>Macvlan Network</li>
</ol>
<p>The Macvlan network driver allows you to assign a MAC address to a container, which makes it appear as a physical device on the network. This can be useful for legacy applications that require direct access to the network hardware.</p>
<p>To create a Macvlan network, use the following command:</p>
<pre><code class="lang-bash">docker network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=eth0 my-macvlan-network
</code></pre>
<p>In this command, <code>--subnet</code> specifies the IP address range for the network, <code>--gateway</code> specifies the default gateway, and <code>-o parent</code> specifies the physical interface to use.</p>
<h2 id="heading-creating-a-docker-network">Creating a Docker Network</h2>
<p>To create a new Docker network, you can use the docker network create command, followed by a network name and options to specify the network driver and subnet. For example, to create a new bridge network called demo-network, you can run the following command:</p>
<pre><code class="lang-bash">docker network create --driver bridge --subnet 172.20.0.0/16 demo-network
</code></pre>
<p>This command creates a new bridge network with the specified subnet and assigns the name demo-network to the network. You can then use this network to connect containers to each other and to external networks.</p>
<h2 id="heading-connecting-containers-to-a-network">Connecting Containers to a Network</h2>
<p>Once you've created a network, you can connect containers to it using the --network option when you run the container.</p>
<p>For example, to start a new container and connect it to the my-network bridge network, you can run the following command:</p>
<pre><code class="lang-bash">docker run --name my-container --network my-network my-image
</code></pre>
<p>This command starts a new container called my-container from the image my-image and connects it to the my-network bridge network.</p>
<p>If you want to connect an existing container to a network, you can use the docker network connect command. For example, to connect a container called my-container to the my-network bridge network, you can run the following command:</p>
<pre><code class="lang-bash">docker network connect my-network my-container
</code></pre>
<p>This command connects the my-container container to the my-network bridge network.</p>
<h2 id="heading-exposing-container-ports">Exposing Container Ports</h2>
<p>When you connect containers to a network, you can also expose the container's ports to the network, allowing other containers to access the container's services.</p>
<p>To expose a port, you can use the -p option when you run the container, followed by the container port and the host port.</p>
<p>For example, to expose port 80 on the container to port 8080 on the host, you can run the following command:</p>
<pre><code class="lang-bash">docker run -p 8080:80 demo-image
</code></pre>
<p>This command starts a new container from the demo-image image and maps port 80 in the container to port 8080 on the host.</p>
<h2 id="heading-using-docker-dns">Using Docker DNS</h2>
<p>By default, Docker provides a DNS server for containers, which allows containers to communicate with each other using hostnames instead of IP addresses. When you create a container, Docker automatically adds the container's hostname to the DNS server, which allows other containers to access the container using its hostname.</p>
<p>For example, if you have two containers called web and db connected to the same network, you can access the db container from the web container using its hostname, like this:</p>
<pre><code class="lang-bash">ping db
</code></pre>
<h3 id="heading-what-is-the-best-way-to-work-with-docker-dns">What is the Best Way to Work with Docker DNS:</h3>
<p>Docker Compose is a tool that allows you to define and run multi-container Docker applications. It simplifies the process of configuring network communication between containers by allowing you to define networks in a single file.</p>
<p>To define a network in a Docker Compose file, use the following syntax:</p>
<div class="gist-block embed-wrapper" data-gist-show-loading="false" data-id="e3b9b678cf9f10de53cff4d79aa7c960"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a href="https://gist.github.com/ravikyada/e3b9b678cf9f10de53cff4d79aa7c960" class="embed-card">https://gist.github.com/ravikyada/e3b9b678cf9f10de53cff4d79aa7c960</a></div><p> </p>
<p>In this example, we define a bridge network called <code>my-network</code> and connect two services (<code>app</code> and <code>db</code>) to it.</p>
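<p>A minimal Compose file along these lines might look like the sketch below; the service images are placeholders:</p>
<pre><code class="lang-yaml">version: "3.8"

services:
  app:
    image: my-app:latest      # placeholder application image
    networks:
      - my-network

  db:
    image: postgres:15
    networks:
      - my-network

networks:
  my-network:
    driver: bridge
</code></pre>
<p>With both services attached to <code>my-network</code>, the <code>app</code> container can reach the database simply via the hostname <code>db</code>.</p>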
<h2 id="heading-network-security">Network Security:</h2>
<p>Security is an important consideration when configuring communication between containers.</p>
<p>By default, all containers on the same Docker network can communicate with each other, which can create a security risk. Here are some tips for securing your Docker network:</p>
<ol>
<li><p>Use Network Segmentation: Divide your Docker network into multiple subnets to restrict communication between containers. For example, you can create a separate network for your database containers and only allow application containers to access it.</p>
</li>
<li><p>Use Access Control: Use Docker's built-in firewall to restrict traffic between containers. For example, starting the Docker daemon with the --icc=false flag disables unrestricted inter-container communication on the default bridge network.</p>
</li>
</ol>
<h2 id="heading-conclusion">Conclusion:</h2>
<p>In conclusion, Docker networking plays an important role in containerized applications. It provides a seamless and efficient way of communication between containers and the host machine. Docker offers a range of networking options, such as bridge, host, overlay, and macvlan, that can be used to meet various requirements.</p>
<p>Understanding Docker networking is crucial for anyone who works with Docker containers. It helps in managing containerized applications efficiently and effectively. Docker provides a lot of tools to monitor and troubleshoot networking issues, such as Docker inspect, Docker network, and Docker logs.</p>
<p>Overall, Docker networking is a powerful and essential feature of the Docker platform. It allows developers and operations teams to build and deploy complex containerized applications with ease. By mastering Docker networking, you can make your containerized applications more reliable, scalable, and secure.</p>
]]></content:encoded></item><item><title><![CDATA[Basic Useful Commands for Docker]]></title><description><![CDATA[As DevOps continues to revolutionize the software development process, Docker has become an essential tool for developers looking to build and deploy their applications more efficiently.
What is Docker:
Docker is a powerful containerization platform ...]]></description><link>https://hashnode.ravikyada.in/basic-useful-commands-docker</link><guid isPermaLink="true">https://hashnode.ravikyada.in/basic-useful-commands-docker</guid><category><![CDATA[Docker]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Ravi Kyada]]></dc:creator><pubDate>Sat, 15 Apr 2023 13:38:57 GMT</pubDate><content:encoded><![CDATA[<p>As DevOps continues to revolutionize the software development process, Docker has become an essential tool for developers looking to build and deploy their applications more efficiently.</p>
<h3 id="heading-what-is-docker">What is Docker:</h3>
<p>Docker is a powerful containerization platform that allows developers to package their applications and dependencies into a single, portable container.</p>
<p>With Docker, developers can easily move their applications between environments, eliminate compatibility issues, and simplify the deployment process.</p>
<p>However, to fully leverage Docker's capabilities, it's essential to understand some of the Docker components and most useful commands for managing and troubleshooting Docker containers.</p>
<p>In this article, we'll explore some of the most basic and essential Docker components and commands that every DevOps engineer should know.</p>
<h3 id="heading-lets-dive-into-docker-components">Let's Dive into Docker Components:</h3>
<p>Before we dive into Docker commands, let's briefly review the essential components of Docker: the Engine, images, containers, registries, and Compose.</p>
<p><strong>Docker Engine:</strong><br />Docker Engine is the core component of Docker that enables containerization on a host machine. It includes a daemon process (dockerd) and a command-line interface (CLI) client (docker) that communicates with the daemon to manage containers, images, networks, and volumes.</p>
<p><strong>Docker Images:</strong><br />A Docker image is a lightweight, standalone, and executable package that contains everything needed to run an application, including code, libraries, dependencies, and runtime. Images are built from a Dockerfile, which is a script that defines the steps to create the image.</p>
<p><strong>Docker Containers:</strong><br />A Docker container is a running instance of a Docker image. Containers are isolated environments that run on top of the host machine's operating system, with their own filesystem, network, and resources. Containers are lightweight, fast, and can be easily started, stopped, and restarted.</p>
<p><strong>Docker Registry:</strong><br />A Docker registry is a repository that stores Docker images. The most popular Docker registry is Docker Hub, a public registry that hosts thousands of images. However, organizations can also set up private registries to store and share their own images.</p>
<p><strong>Docker Compose:</strong><br />Docker Compose is a tool for defining and running multi-container Docker applications. Compose uses a YAML file to define the services, networks, and volumes required by the application, and can start, stop, and manage the containers as a single unit.</p>
<h2 id="heading-basic-useful-commands-for-docker">Basic Useful Commands for Docker:</h2>
<h3 id="heading-docker-ps"><strong>Docker ps:</strong></h3>
<p>Lists all the running containers on the machine.</p>
<pre><code class="lang-bash">docker ps
</code></pre>
<p>The docker ps command is used to list all the running containers on your local machine. By default, it shows only the container ID, image name, command, and status of the running containers.</p>
<p>However, you can use various options with this command to display additional information, such as container names, ports, and network information.</p>
<p><strong>Example:</strong> To display only the container ID and name of the running containers, use the following command:</p>
<pre><code class="lang-bash">docker ps --format <span class="hljs-string">"table {{.ID}}\t{{.Names}}"</span>
</code></pre>
<h3 id="heading-docker-run"><strong>Docker run:</strong></h3>
<p>Runs a container from an image on the machine.</p>
<p>The docker run command is used to create and run a new container from a Docker image. You can use various options with this command to customize the container, such as the container name, port mappings, and environment variables.</p>
<p><strong>Example 1</strong>: Run a container from the "nginx" image. The <code>-d</code> flag runs the container in detached (background) mode:</p>
<pre><code class="lang-bash">docker run -d nginx
</code></pre>
<p><strong>Example 2</strong>: Run a new container named "my-nginx" from the "nginx" image and map port 80 on the host to port 80 in the container:</p>
<pre><code class="lang-bash">docker run -d --name my-nginx -p 80:80 nginx
</code></pre>
<p><strong>Example 3:</strong> Run a container with a directory from the local machine mounted into the container:</p>
<pre><code class="lang-bash">docker run -d --name mynginx -p 80:80 -v /home/user/demo:/usr/share/nginx/html nginx
</code></pre>
<p>This will run a container from the nginx image and mount the <code>/home/user/demo</code> directory on the host to the <code>/usr/share/nginx/html</code> directory inside the container.</p>
<h3 id="heading-docker-stop"><strong>Docker stop:</strong></h3>
<p>Stops a running container on your local machine.</p>
<p>The docker stop command is used to stop a running container. You can specify the container ID or container name as an argument to this command.</p>
<p><strong>Example 1</strong>: To stop a container with the ID "7d7a18ea75f", use the following command:</p>
<pre><code class="lang-bash">docker stop 7d7a18ea75f
</code></pre>
<p><strong>Example 2</strong>: To stop all the running containers on your machine, pass the IDs returned by <code>docker ps -q</code>:</p>
<pre><code class="lang-bash">docker stop $(docker ps -q)
</code></pre>
<h3 id="heading-docker-rm"><strong>docker rm</strong>:</h3>
<p>The docker rm command is used to remove one or more containers from your local machine. You can specify the container ID or container name as an argument to this command.</p>
<p>You cannot remove a running container, so first stop the container and then remove it.</p>
<p><strong>Example 1</strong>: Remove multiple stopped containers in a single command:</p>
<pre><code class="lang-bash">docker rm 7d7a18ea75f 7d7a16740fbn
</code></pre>
<p><strong>Example 2</strong>: Remove All the stopped containers in a single command. The <code>docker ps -a -q</code> command is used to get the IDs of all containers, including stopped ones.</p>
<pre><code class="lang-bash">docker rm $(docker ps -a -q)
</code></pre>
<p><mark>PRO Tip:</mark> By default, when you delete a container using the <code>docker rm</code> command, any volumes that were created for the container will not be deleted. This means that the data stored in those volumes will persist even after the container is deleted.</p>
<p>If you want to delete the volumes associated with a container when you remove the container, you can use the <code>-v</code> option with the <code>docker rm</code> command. For example:</p>
<pre><code class="lang-bash">docker rm -v &lt;container_name_or_id&gt;
</code></pre>
<h3 id="heading-docker-images"><strong>docker images</strong>:</h3>
<p>The docker images command is used to list all the Docker images on your local machine. By default, it shows the repository, tag, image ID, creation time, and size.</p>
<p><strong>Example 1</strong>: You can use the <code>REPOSITORY</code> argument to filter images by repository name. For example, to list all images in the <code>nginx</code> repository, run the following command:</p>
<pre><code class="lang-bash">docker images nginx
</code></pre>
<p><strong>Example 2</strong>: To display only the repository and tag of the Docker images, use the following command:</p>
<pre><code class="lang-bash">docker images --format <span class="hljs-string">"table {{.Repository}}\t{{.Tag}}"</span>
</code></pre>
<h3 id="heading-docker-build">docker build:</h3>
<p>The docker build command is used to create a Docker image from a <code>Dockerfile</code>. A Dockerfile is a text file that contains a set of instructions that are used to build a Docker image.</p>
<p>The docker build command reads the instructions from the Dockerfile and builds an image according to the specified instructions.</p>
<p>Example: Assuming we have a directory named <code>myapp</code> that contains the Dockerfile and all necessary files, you can build a Docker image using the following command:</p>
<pre><code class="lang-bash">docker build -t demo-image -f Dockerfile .
</code></pre>
<p>In this example, the <code>-t</code> option specifies the name (and optionally a tag) of the Docker image that will be created. Here <code>demo-image</code> is the image name, and since no tag is given, <code>latest</code> is applied by default. The <code>-f</code> flag points to the Dockerfile to build from, and the trailing <code>.</code> sets the build context to the current directory. Here is the Dockerfile used in this example, for a simple Node.js application:</p>
<pre><code class="lang-bash">FROM node:14-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [<span class="hljs-string">"npm"</span>, <span class="hljs-string">"start"</span>]
</code></pre>
<ul>
<li><p><code>FROM</code>: Specifies the base image that will be used to build the new image. In this case, the Node.js 14 Alpine image is used.</p>
</li>
<li><p><code>WORKDIR</code>: Sets the working directory for the following instructions.</p>
</li>
<li><p><code>COPY</code>: Copies the <code>package.json</code> and <code>package-lock.json</code> files to the working directory.</p>
</li>
<li><p><code>RUN</code>: Installs the dependencies specified in the <code>package.json</code> file using the <code>npm install</code> command.</p>
</li>
<li><p><code>COPY</code>: Copies all the files in the current directory to the working directory.</p>
</li>
<li><p><code>EXPOSE</code>: Exposes port 3000 to the outside world.</p>
</li>
<li><p><code>CMD</code>: Specifies the command that will be run when the container is started.</p>
</li>
</ul>
<p>By running the <code>docker build</code> command with the appropriate options, you can build a Docker image from this Dockerfile. Once the image is built, you can use the <code>docker run</code> command to start a new container from the image.</p>
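<p>One practical note: the trailing <code>.</code> in the build command sends the whole directory to the Docker daemon as the build context, and <code>COPY . .</code> copies all of it into the image. A <code>.dockerignore</code> file (an illustrative sketch is shown below) keeps unneeded files out of both, which speeds up builds and shrinks images:</p>
<pre><code class="lang-plaintext">node_modules
npm-debug.log
.git
</code></pre>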
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>Docker is a powerful tool for containerizing and deploying applications, and mastering Docker commands is essential for any DevOps engineer. By understanding the components and commands covered here, you will have a solid foundation for building, running, and managing containers in your day-to-day work.</p>
]]></content:encoded></item><item><title><![CDATA[Deploying External DNS With AWS EKS]]></title><description><![CDATA[Amazon Web Services (AWS) Elastic Kubernetes Service (EKS) is a managed service that simplifies the deployment, scaling, and management of containerized applications using Kubernetes.
One of the key benefits of EKS is its ability to integrate with ot...]]></description><link>https://hashnode.ravikyada.in/deploy-external-dns-aws-eks</link><guid isPermaLink="true">https://hashnode.ravikyada.in/deploy-external-dns-aws-eks</guid><category><![CDATA[Devops]]></category><category><![CDATA[EKS]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Ravi Kyada]]></dc:creator><pubDate>Sat, 15 Apr 2023 12:49:29 GMT</pubDate><content:encoded><![CDATA[<p>Amazon Web Services (AWS) Elastic Kubernetes Service (EKS) is a managed service that simplifies the deployment, scaling, and management of containerized applications using Kubernetes.</p>
<p>One of the key benefits of EKS is its ability to integrate with other AWS services, such as Route 53, to provide DNS resolution for your Kubernetes clusters.</p>
<p>In this article, we will discuss how to deploy External DNS on AWS EKS to automate the creation and deletion of DNS records for Kubernetes services.</p>
<h3 id="heading-overview-of-external-dns">Overview of External DNS:</h3>
<p>External DNS is a Kubernetes add-on that automatically creates and deletes DNS records in an external DNS provider based on Kubernetes service and ingress resources.</p>
<p>This enables you to use custom domain names for your Kubernetes services without having to manually create and update DNS records. External DNS supports a variety of DNS providers, including Route 53, Google Cloud DNS, Azure DNS, and more.</p>
<h2 id="heading-deploying-external-dns-on-aws-eks">Deploying External DNS on AWS EKS:</h2>
<p>To deploy External DNS on AWS EKS, you will need to follow the steps below:</p>
<h3 id="heading-create-an-iam-policy"><strong>Create an IAM policy</strong></h3>
<div class="gist-block embed-wrapper" data-gist-show-loading="false" data-id="da265ef115e5838bfedac9201db0a8dd"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a href="https://gist.github.com/ravikyada/da265ef115e5838bfedac9201db0a8dd" class="embed-card">https://gist.github.com/ravikyada/da265ef115e5838bfedac9201db0a8dd</a></div><p> </p>
<p>First, you need to create an IAM policy that grants External DNS permissions to manage Route 53 resources. You can use the policy in the gist above as a starting point.</p>
<p>Save this policy as a JSON file, such as “external-dns-iam-policy.json”, and create a policy from it in the AWS IAM dashboard. Then copy the ARN of the policy, as we will need it in an upcoming command.</p>
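<p>For reference, the upstream ExternalDNS documentation suggests a Route 53 policy along these lines, and the gist above should look similar. In production you may want to restrict the first statement to your specific hosted zone ARNs rather than using a wildcard:</p>
<pre><code class="lang-json">{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["route53:ChangeResourceRecordSets"],
      "Resource": ["arn:aws:route53:::hostedzone/*"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:ListResourceRecordSets"
      ],
      "Resource": ["*"]
    }
  ]
}
</code></pre>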
<h3 id="heading-create-iam-service-account-with-the-policy-you-just-created"><strong>Create IAM Service Account with the Policy you just Created</strong></h3>
<p>Next, you need to create an IAM Service Account that External DNS can assume to manage Route 53 resources. You can use the following command to create the role:</p>
<p>For that, you need to have eksctl configured in your system.</p>
<pre><code class="lang-bash">eksctl create iamserviceaccount --name &lt;sa-name&gt; --namespace &lt;namespace&gt; --cluster &lt;cluster-name&gt; --attach-policy-arn &lt;policy-arn&gt; --approve --profile &lt;aws-profile&gt;
</code></pre>
<p>Replace the values between angle brackets &lt;&gt; in this command. Here is a sample command; please be sure you use the same namespace throughout the whole tutorial.</p>
<pre><code class="lang-bash">eksctl create iamserviceaccount --name external-dns --namespace default --cluster demo-cluster --attach-policy-arn arn:aws:iam::012345678901:policy/ExternalDNSPolicy --approve --profile demo-keys
</code></pre>
<p>Behind the scenes, this command creates a CloudFormation stack in your AWS account, so you can check the status and resources it creates in the AWS CloudFormation dashboard.</p>
<p>Once the CloudFormation stack completes, verify that the ServiceAccount was created in the default namespace:</p>
<pre><code class="lang-bash">kubectl get sa -n default
</code></pre>
<p>We are now done with the AWS-side configuration. Next, let’s set up the Kubernetes side so the cluster has permission to make DNS changes in the Route 53 hosted zone from Kubernetes resources.</p>
<h3 id="heading-create-clusterrole-and-clusterrole-binding">Create ClusterRole and ClusterRole Binding:</h3>
<p>Let’s create ClusterRole and ClusterRoleBinding files so that the external-dns controller can make changes to Route 53 DNS records.</p>
<div class="gist-block embed-wrapper" data-gist-show-loading="false" data-id="c90064e5b90598592b14076491ece435"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a href="https://gist.github.com/ravikyada/c90064e5b90598592b14076491ece435" class="embed-card">https://gist.github.com/ravikyada/c90064e5b90598592b14076491ece435</a></div><p> </p>
<p>Now let’s create a ClusterRoleBinding.yaml file for the ClusterRole we just created.</p>
<div class="gist-block embed-wrapper" data-gist-show-loading="false" data-id="c2273191c26b9b789fd36537cf2eb1ff"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a href="https://gist.github.com/ravikyada/c2273191c26b9b789fd36537cf2eb1ff" class="embed-card">https://gist.github.com/ravikyada/c2273191c26b9b789fd36537cf2eb1ff</a></div><p> </p>
<p>That completes the access setup with the ClusterRole and ClusterRoleBinding; just apply both files with the following command:</p>
<pre><code class="lang-bash">kubectl apply -f &lt;filename.yaml&gt;
</code></pre>
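<p>For reference, the ClusterRole and ClusterRoleBinding in the gists above are typically shaped like the following sketch, based on the upstream ExternalDNS examples (the ServiceAccount name and namespace must match the one created earlier):</p>
<pre><code class="lang-yaml">apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "pods"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["extensions", "networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "watch", "list"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
  - kind: ServiceAccount
    name: external-dns
    namespace: default
</code></pre>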
<p>Next, we will create a Deployment for the external-dns controller, which will do all the work for us.</p>
<h3 id="heading-create-a-deployment-file-for-external-dns-controller">Create a Deployment file for External DNS Controller</h3>
<div class="gist-block embed-wrapper" data-gist-show-loading="false" data-id="ff714ed1ecc37fc329f894774906fe24"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a href="https://gist.github.com/ravikyada/ff714ed1ecc37fc329f894774906fe24" class="embed-card">https://gist.github.com/ravikyada/ff714ed1ecc37fc329f894774906fe24</a></div><p> </p>
<p>This Deployment will run the External DNS controller in your Kubernetes cluster. The controller watches your Kubernetes Services and Ingresses and automatically creates and deletes Route 53 DNS records as necessary.</p>
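<p>As a sketch of what the Deployment in the gist above typically contains (the image tag, domain filter, and owner ID below are placeholders you should adjust for your cluster):</p>
<pre><code class="lang-yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns   # the IAM-linked ServiceAccount created earlier
      containers:
        - name: external-dns
          image: registry.k8s.io/external-dns/external-dns:v0.14.0  # pin a current tag
          args:
            - --source=service
            - --source=ingress
            - --domain-filter=example.com   # restrict to your hosted zone
            - --provider=aws
            - --policy=upsert-only          # never delete records it did not create
            - --txt-owner-id=demo-cluster   # identifies this cluster's records
</code></pre>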
<p>After applying the Deployment file, check that the Deployment is up with one running pod:</p>
<pre><code class="lang-bash">kubectl get deployments
</code></pre>
<p>You can also check the logs of the pod created by the Deployment:</p>
<pre><code class="lang-bash">kubectl logs &lt;pod name&gt; -n default
</code></pre>
<p>With the controller fully deployed, External DNS is ready, and you no longer need to create DNS records for your cluster’s services by hand. It will automatically add your Services and Ingresses to the Route 53 hosted zone.</p>
<h3 id="heading-configuring-external-dns-in-k8s-files"><strong>Configuring External DNS in K8s Files</strong></h3>
<p>Once External DNS is deployed, you can configure it to create and manage DNS records for your Kubernetes services. To do this, you can add annotations to your Kubernetes services or ingresses that specify the desired DNS name and DNS provider.</p>
<p>For example, to create a DNS record for a Kubernetes service named “my-service” with a DNS name of “my-service.example.com” in Route 53, you can add the following annotation to your service:</p>
<div class="gist-block embed-wrapper" data-gist-show-loading="false" data-id="84c02434dfbef47915cf52334e03fa34"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a href="https://gist.github.com/ravikyada/84c02434dfbef47915cf52334e03fa34" class="embed-card">https://gist.github.com/ravikyada/84c02434dfbef47915cf52334e03fa34</a></div><p> </p>
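<p>Inline, such an annotated Service generally looks like the following sketch (the hostname, selector, and ports are illustrative):</p>
<pre><code class="lang-yaml">apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    external-dns.alpha.kubernetes.io/hostname: my-service.example.com
spec:
  type: LoadBalancer
  selector:
    app: my-service
  ports:
    - port: 80
      targetPort: 8080
</code></pre>
<p>Once applied, the controller detects the annotation and creates the corresponding record in the Route 53 hosted zone for the domain.</p>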
<p>Congrats! You have successfully deployed external DNS with AWS EKS. With external DNS, you can now easily expose your Kubernetes services to the internet or other parts of your infrastructure by automatically creating DNS records.</p>
<p>Remember to regularly monitor your DNS records to ensure that they are accurate and up-to-date. You can also customize external DNS to meet your specific needs by modifying the configuration options.</p>
<p>Overall, deploying external DNS with AWS EKS is a straightforward process that can provide significant benefits for your applications. With this guide, you should be well-equipped to get started with external DNS and take advantage of its powerful features.</p>
]]></content:encoded></item></channel></rss>