
-
Setting Line Length in CSS (and Fitting Text to a Container)
by: Daniel Schwarz Mon, 14 Jul 2025 12:38:23 +0000 First, what is line length? Line length is the length of a container that holds a body of multi-line text. “Multi-line” is the key part here, because text becomes less readable if the beginning of a line of text is too far away from the end of the prior line of text. This causes users to reread lines by mistake, and generally get lost while reading. Luckily, the Web Content Accessibility Guidelines (WCAG) gives us a pretty hard rule to follow: no more than 80 characters on a line (40 if the language is Chinese, Japanese, or Korean), which is super easy to implement using character (ch) units: width: 80ch; The width of 1ch is equal to the width of the number 0 in your chosen font, so the exact width depends on the font. Setting the optimal line length Just because you’re allowed up to 80 characters on a line, it doesn’t mean that you have to aim for that number. A study by the Baymard Institute revealed that a line length of 50-75 characters is the optimal length — this takes into consideration that smaller line lengths mean more lines and, therefore, more opportunities for users to make reading mistakes. That being said, we also have responsive design to think about, so setting a minimum width (e.g., min-width: 50ch) isn’t a good idea because you’re unlikely to fit 50 characters on a line with, for example, a screen/window size that is 320 pixels wide. So, there’s a bit of nuance involved, and the best way to handle that is by combining the clamp() and min() functions: clamp(): Set a fluid value that’s relative to a container using percentage, viewport, or container query units, but with minimum and maximum constraints. min(): Set the smallest value from a list of comma-separated values. Let’s start with min(). One of the arguments is 93.75vw. Assuming that the container extends across the whole viewport, this’d equal 300px when the viewport width is 320px (allowing for 20px of spacing to be distributed as you see fit) and 1350px when the viewport width is 1440px. However, for as long as the other argument (50ch) is the smallest of the two values, that’s the value that min() will resolve to. min(93.75vw, 50ch); Next is clamp(), which accepts three arguments in the following order: the minimum, preferred, and maximum values. This is how we’ll set the line length. For the minimum, you’d plug in your min() function, which sets the 50ch line length but only conditionally. For the maximum, I suggest 75ch, as mentioned before. The preferred value is totally up to you — this will be the width of your container when not hitting the minimum or maximum. width: clamp(min(93.75vw, 50ch), 70vw, 75ch); In addition, you can use min(), max(), and calc() in any of those arguments to add further nuance. If the container feels too narrow, then the font-size might be too large. If it feels too wide, then the font-size might be too small. Fit text to container (with JavaScript) You know that design trend where text is made to fit the width of a container? Typically, to utilize as much of the available space as possible? You’ll often see it applied to headings on marketing pages and blog posts. Well, Chris wrote about it back in 2018, rounding up several ways to achieve the effect with JavaScript or jQuery, unfortunately with limitations. However, the ending reveals that you can just use SVG as long as you know the viewBox values, and I actually have a trick for getting them. 
Although it still requires 3-5 lines of JavaScript, it’s the shortest method I’ve found. It also slides into HTML and CSS perfectly, particularly since the SVG inherits many CSS properties (including the color, thanks to fill: currentColor): CodePen Embed Fallback <h1 class="container"> <svg> <text>Fit text to container</text> </svg> </h1> h1.container { /* Container size */ width: 100%; /* Type styles (<text> will inherit most of them) */ font: 900 1em system-ui; color: hsl(43 74% 3%); text { /* We have to use fill: instead of color: here But we can use currentColor to inherit the color */ fill: currentColor; } } /* Select all SVGs */ const svg = document.querySelectorAll("svg"); /* Loop all SVGs */ svg.forEach(element => { /* Get bounding box of <text> element */ const bbox = element.querySelector("text").getBBox(); /* Apply bounding box values to SVG element as viewBox */ element.setAttribute("viewBox", [bbox.x, bbox.y, bbox.width, bbox.height].join(" ")); }); Fit text to container (pure CSS) If you’re hell-bent on a pure-CSS method, you are in luck. However, despite the insane things that we can do with CSS these days, Roman Komarov’s fit-to-width hack is a bit complicated (albeit rather impressive). Here’s the gist of it: The text is duplicated a couple of times (although hidden accessibly with aria-hidden and hidden literally with visibility: hidden) so that we can do math with the hidden ones, and then apply the result to the visible one. Using container queries/container query units, the math involves dividing the inline size of the text by the inline size of the container to get a scaling factor, which we then use on the visible text’s font-size to make it grow or shrink. To make the scaling factor unitless, we use the tan(atan2()) type-casting trick. Certain custom properties must be registered using the @property at-rule (otherwise they don’t work as intended). The final font-size value utilizes clamp() to set minimum and maximum font sizes, but these are optional. <span class="text-fit"> <span> <span class="text-fit"> <span><span>fit-to-width text</span></span> <span aria-hidden="true">fit-to-width text</span> </span> </span> <span aria-hidden="true">fit-to-width text</span> </span> .text-fit { display: flex; container-type: inline-size; --captured-length: initial; --support-sentinel: var(--captured-length, 9999px); & > [aria-hidden] { visibility: hidden; } & > :not([aria-hidden]) { flex-grow: 1; container-type: inline-size; --captured-length: 100cqi; --available-space: var(--captured-length); & > * { --support-sentinel: inherit; --captured-length: 100cqi; --ratio: tan( atan2( var(--available-space), var(--available-space) - var(--captured-length) ) ); --font-size: clamp( 1em, 1em * var(--ratio), var(--max-font-size, infinity * 1px) - var(--support-sentinel) ); inline-size: var(--available-space); &:not(.text-fit) { display: block; font-size: var(--font-size); @container (inline-size > 0) { white-space: nowrap; } } /* Necessary for variable fonts that use optical sizing */ &.text-fit { --captured-length2: var(--font-size); font-variation-settings: "opsz" tan(atan2(var(--captured-length2), 1px)); } } } } @property --captured-length { syntax: "<length>"; initial-value: 0px; inherits: true; } @property --captured-length2 { syntax: "<length>"; initial-value: 0px; inherits: true; } CodePen Embed Fallback Watch for new text-grow/text-shrink properties To make fitting text to a container possible in just one line of CSS, a number of solutions have been discussed. 
The favored solution seems to be two new text-grow and text-shrink properties. Personally, I don’t think we need two different properties. In fact, I prefer the simpler alternative, font-size: fit-width, but since text-grow and text-shrink are already on the table (Chrome intends to prototype and you can track it), let’s take a look at how they could work. The first thing that you need to know is that, as proposed, the text-grow and text-shrink properties can apply to multiple lines of wrapped text within a container, and that’s huge because we can’t do that with my JavaScript technique or Roman’s CSS technique (where each line needs to have its own container). Both have the same syntax, and you’ll need to use both if you want to allow both growing and shrinking: text-grow: <fit-target> <fit-method>? <length>?; text-shrink: <fit-target> <fit-method>? <length>?; <fit-target> per-line: For text-grow, lines of text shorter than the container will grow to fit it. For text-shrink, lines of text longer than the container will shrink to fit it. consistent: For text-grow, the shortest line will grow to fit the container while all other lines grow by the same scaling factor. For text-shrink, the longest line will shrink to fit the container while all other lines shrink by the same scaling factor. <fit-method> (optional) scale: Scale the glyphs instead of changing the font-size. scale-inline: Scale the glyphs instead of changing the font-size, but only horizontally. font-size: Grow or shrink the font size accordingly. (I don’t know what the default value would be, but I imagine this would be it.) letter-spacing: The letter spacing will grow/shrink instead of the font-size. <length> (optional): The maximum font size for text-grow or minimum font size for text-shrink. Again, I think I prefer the font-size: fit-width approach as this would grow and shrink all lines to fit the container in just one line of CSS. The above proposal does way more than I want it to, and there are already a number of roadblocks to overcome (many of which are accessibility-related). That’s just me, though, and I’d be curious to know your thoughts in the comments. Conclusion It’s easier to set line length with CSS now than it was a few years ago. Now we have character units, clamp() and min() (and max() and calc() if you wanted to throw those in too), and wacky things that we can do with SVGs and CSS to fit text to a container. It does look like text-grow and text-shrink (or an equivalent solution) are what we truly need though, at least in some scenarios. Until we get there, this is a good time to weigh-in, which you can do by adding your feedback, tests, and use-cases to the GitHub issue. Setting Line Length in CSS (and Fitting Text to a Container) originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
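As a quick, copy-pasteable recap of the line-length technique above, here is a minimal sketch. The .prose class name, the 2rem gutter, and the centering margin are illustrative choices rather than part of the original examples; the min() term subtracts a small gutter instead of using 93.75vw, which is the kind of "further nuance" calc-style adjustment mentioned earlier:

.prose {
  /* at least 50ch where it fits, never wider than 75ch, roughly 70vw in between */
  width: clamp(min(100vw - 2rem, 50ch), 70vw, 75ch);
  margin-inline: auto; /* keep the measure centered */
}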
-
The Layout Maestro Course
by: Geoff Graham Fri, 11 Jul 2025 17:07:13 +0000 Layout. It’s one of those easy-to-learn, difficult-to-master things, like they say about playing bass. Not because it’s innately difficult to, say, place two elements next to each other, but because there are many, many ways to tackle it. And layout is one area of CSS that seems to evolve more than others, as we’ve seen in the past 10-ish years with the Flexbox, CSS Grid, Subgrid, and now Masonry to name but a few. May as well toss in Container Queries while we’re at it. And reading flow. And… That’s a good way to start talking about a new online course that Ahmad Shadeed is planning to release called The Layout Maestro. I love that name, by the way. It captures exactly how I think about working with layouts: orchestrating how and where things are arranged on a page. Layouts are rarely static these days. They are expected to adapt to the user’s context, not totally unlike a song changing keys. Ahmad is the perfect maestro to lead a course on layout, as he does more than most when it comes to experimenting with layout features and demonstrating practical use cases, as you may have already seen in his thorough and wildly popular interactive guides on Container Queries, grid areas, box alignment, and positioning (just to name a few). The course is still in development, but you can get a leg up and sign up to be notified by email when it’s ready. That’s literally all of the information I have at this point, but I still feel compelled to share it and encourage you to sign up for updates because I know few people more qualified to wax on about CSS layout than Ahmad and am nothing but confident that it will be great, worth the time, and worth the investment. I’m also learning that I have a really hard time typing “maestro” correctly. 🤓 The Layout Maestro Course originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
-
Scroll-Driven Sticky Heading
by: Amit Sheen Fri, 11 Jul 2025 12:43:59 +0000 Scroll-driven animations are great! They’re a powerful tool that lets developers tie the movement and transformation of elements directly to the user’s scroll position. This technique opens up new ways to create interactive experiences, cuing images to appear, text to glide across the stage, and backgrounds to subtly shift. Used thoughtfully, scroll-driven animations (SDA) can make your website feel more dynamic, engaging, and responsive. A few weeks back, I was playing around with scroll-driven animations, just searching for all sorts of random things you could do with it. That’s when I came up with the idea to animate the text of the main heading (h1) and, using SDA, change the heading itself based on the user’s scroll position on the page. In this article, we’re going to break down that idea and rebuild it step by step. This is the general direction we’ll be heading in, which looks better in full screen and viewed in a Chromium browser: CodePen Embed Fallback It’s important to note that the effect in this example only works in browsers that support scroll-driven animations. Where SDA isn’t supported, there’s a proper fallback to static headings. From an accessibility perspective, if the browser has reduced motion enabled or if the page is being accessed with assistive technology, the effect is disabled and the user gets all the content in a fully semantic and accessible way. Just a quick note: this approach does rely on a few “magic numbers” for the keyframes, which we’ll talk about later on. While they’re surprisingly responsive, this method is really best suited for static content, and it’s not ideal for highly dynamic websites. Closer Look at the Animation Before we dive into scroll-driven animations, let’s take a minute to look at the text animation itself, and how it actually works. This is based on an idea I had a few years back when I wanted to create a typewriter effect. At the time, most of the methods I found involved animating the element’s width, required using a monospace font, or a solid color background. None of which really worked for me. So I looked for a way to animate the content itself, and the solution was, as it often is, in pseudo-elements. CodePen Embed Fallback Pseudo-elements have a content property, and you can (kind of) animate that text. It’s not exactly animation, but you can change the content dynamically. The cool part is that the only thing that changes is the text itself, no other tricks required. Start With a Solid Foundation Now that you know the trick behind the text animation, let’s see how to combine it with a scroll-driven animation, and make sure we have a solid, accessible fallback as well. We’ll start with some basic semantic markup. I’ll wrap everything in a main element, with individual sections inside. Each section gets its own heading and content, like text and images. For this example, I’ve set up four sections, each with a bit of text and some images, all about Primary Colors. <main> <section> <h1>Primary Colors</h1> <p>The three primary colors (red, blue, and yellow) form the basis of all other colors on the color wheel. Mixing them in different combinations produces a wide array of hues.</p> <img src="./colors.jpg" alt="...image description"> </section> <section> <h2>Red Power</h2> <p>Red is a bold and vibrant color, symbolizing energy, passion, and warmth. 
It easily attracts attention and is often linked with strong emotions.</p> <img src="./red.jpg" alt="...image description"> </section> <section> <h2>Blue Calm</h2> <p>Blue is a calm and cool color, representing tranquility, stability, and trust. It evokes images of the sky and sea, creating a peaceful mood.</p> <img src="./blue.jpg" alt="...image description"> </section> <section> <h2>Yellow Joy</h2> <p>Yellow is a bright and cheerful color, standing for light, optimism, and creativity. It is highly visible and brings a sense of happiness and hope.</p> <img src="./yellow.jpg" alt="...image description"> </section> </main> As for the styling, I’m not doing anything special at this stage, just the basics. I changed the font and adjusted the text and heading sizes, set up the display for the main and the sections, and fixed the image sizes with object-fit. CodePen Embed Fallback So, at this point, we have a simple site with static, semantic, and accessible content, which is great. Now the goal is to make sure it stays that way as we start adding our effect. The Second First Heading We’ll start by adding another h1 element at the top of the main. This new element will serve as the placeholder for our animated text, updating according to the user’s scroll position. And yes, I know there’s already an h1 in the first section; that’s fine and we’ll address it in a moment so that only one is accessible at a time. <h1 class="scrollDrivenHeading" aria-hidden="true">Primary Colors</h1> Notice that I’ve added aria-hidden="true" to this heading, so it won’t be picked up by screen readers. Now I can add a class specifically for screen readers, .srOnly, to all the other headings. This way, anyone viewing the content “normally” will see only the animated heading, while assistive technology users will get the regular, static semantic headings. CodePen Embed Fallback Note: The style for the .srOnly class is based on “Inclusively Hidden” by Scott O’Hara. Handling Support As much as accessibility matters, there’s another concern we need to keep in mind: support. CSS Scroll-Driven Animations are fantastic, but they’re still not fully supported everywhere. That’s why it’s important to provide the static version for browsers that don’t support SDA. The first step is to hide the animated heading we just added using display: none. Then, we’ll add a new @supports block to check for SDA support. Inside that block, where SDA is supported, we can change back the display for the heading. The .srOnly class should also move into the @supports block, since we only want it to apply when the effect is active, not when it’s not supported. This way, just like with assistive technology, anyone visiting the page in a browser without SDA support will still get the static content. .scrollDrivenHeading { display: none; } @supports (animation-timeline: scroll()) { .scrollDrivenHeading { display: block; } /* Screen Readers Only */ .srOnly { clip: rect(0 0 0 0); clip-path: inset(50%); height: 1px; overflow: hidden; position: absolute; white-space: nowrap; width: 1px; } } Get Sticky The next thing we need to do is handle the stickiness of the heading. To make sure the heading always stays on screen, we’ll set its position to sticky with top: 0 so it sticks to the top of the viewport. While we’re at it, let’s add some basic styling, including a background so the text doesn’t blend with whatever’s behind the heading, a bit of padding for spacing, and white-space: nowrap to keep the heading on a single line. 
/* inside the @supports block */ .scrollDrivenHeading { display: block; position: sticky; top: 0; background-image: linear-gradient(0deg, transparent, black 1em); padding: 0.5em 0.25em; white-space: nowrap; } Now everything’s set up: in normal conditions, we’ll see a single sticky heading at the top of the page. And if someone uses assistive technology or a browser that doesn’t support SDA, they’ll still get the regular static content. CodePen Embed Fallback Now we’re ready to start animating the text. Almost… The Magic Numbers To build the text animation, we need to know exactly where the text should change. With SDA, scrolling basically becomes our timeline, and we have to determine the exact points on that timeline to trigger the animation. To make this easier, and to help you pinpoint those positions, I’ve prepared the following script: @property --scroll-position { syntax: "<number>"; inherits: false; initial-value: 0; } body::after { counter-reset: sp var(--scroll-position); content: counter(sp) "%"; position: fixed; top: 0; left: 0; padding: 1em; background-color: maroon; animation: scrollPosition steps(100); animation-timeline: scroll(); } @keyframes scrollPosition { 0% { --scroll-position: 0; } 100% { --scroll-position: 100; } } I don’t want to get too deep into this code, but the idea is to take the same scroll timeline we’ll use next to animate the text, and use it to animate a custom property (--scroll-position) from 0 to 100 based on the scroll progress, and display that value in the content. If we’ll add this at the start of our code, we’ll see a small red square in the top-left corner of the screen, showing the current scroll position as a percentage (to match the keyframes). This way, you can scroll to any section you want and easily mark the percentage where each heading should begin. CodePen Embed Fallback With this method and a bit of trial and error, I found that I want the headings to change at 30%, 60%, and 90%. So, how do we actually do it? Let’s start animating. Animating Text First, we’ll clear out the content inside the .scrollDrivenHeading element so it’s empty and ready for dynamic content. In the CSS, I’ll add a pseudo-element to the heading, which we’ll use to animate the text. We’ll give it empty content, set up the animation-name, and of course, assign the animation-timeline to scroll(). And since I’m animating the content property, which is a discrete type, it doesn’t transition smoothly between values. It just jumps from one to the next. By setting the animation-timing-function property to step-end, I make sure each change happens exactly at the keyframe I define, so the text switches precisely where I want it to, instead of somewhere in between. .scrollDrivenHeading { /* style */ &::after { content: ''; animation-name: headingContent; animation-timing-function: step-end; animation-timeline: scroll(); } } As for the keyframes, this part is pretty straightforward (for now). We’ll set the first frame (0%) to the first heading, and assign the other headings to the percentages we found earlier. @keyframes headingContent { 0% { content: 'Primary Colors'} 30% { content: 'Red Power'} 60% { content: 'Blue Calm'} 90%, 100% { content: 'Yellow Joy'} } So, now we’ve got a site with a sticky heading that updates as you scroll. CodePen Embed Fallback But wait, right now it just switches instantly. Where’s the animation?! Here’s where it gets interesting. Since we’re not using JavaScript or any string manipulation, we have to write the keyframes ourselves. 
The best approach is to start from the target heading you want to reach, and build backwards. So, if you want to animate between the first and second heading, it would look like this: @keyframes headingContent { 0% { content: 'Primary Colors'} 9% { content: 'Primary Color'} 10% { content: 'Primary Colo'} 11% { content: 'Primary Col'} 12% { content: 'Primary Co'} 13% { content: 'Primary C'} 14% { content: 'Primary '} 15% { content: 'Primary'} 16% { content: 'Primar'} 17% { content: 'Prima'} 18% { content: 'Prim'} 19% { content: 'Pri'} 20% { content: 'Pr'} 21% { content: 'P'} 22% { content: 'R'} 23% { content: 'Re'} 24% { content: 'Red'} 25% { content: 'Red '} 26% { content: 'Red P'} 27% { content: 'Red Po'} 28%{ content: 'Red Pow'} 29% { content: 'Red Powe'} 30% { content: 'Red Power'} 60% { content: 'Blue Calm'} 90%, 100% { content: 'Yellow Joy'} } I simply went back by 1% each time, removing or adding a letter as needed. Note that in other cases, you might want to use a different step size, and not always 1%. For example, on longer headings with more words, you’ll probably want smaller steps. If we repeat this process for all the other headings, we’ll end up with a fully animated heading. CodePen Embed Fallback User Preferences We talked before about accessibility and making sure the content works well with assistive technology, but there’s one more thing you should keep in mind: prefers-reduced-motion. Even though this isn’t a strict WCAG requirement for this kind of animation, it can make a big difference for people with vestibular sensitivities, so it’s a good idea to offer a way to show the content without animations. If you want to provide a non-animated alternative, all you need to do is wrap your @supports block with a prefers-reduced-motion query: @media screen and (prefers-reduced-motion: no-preference) { @supports (animation-timeline: scroll()) { /* style */ } } Leveling Up Let’s talk about variations. In the previous example, we animated the entire heading text, but we don’t have to do that. You can animate just the part you want, and use additional animations to enhance the effect and make things more interesting. For example, here I kept the text “Primary Color” fixed, and added a span after it that handles the animated text. <h1 class="scrollDrivenHeading" aria-hidden="true"> Primary Color<span></span> </h1> And since I now have a separate span, I can also animate its color to match each value. CodePen Embed Fallback In the next example, I kept the text animation on the span, but instead of changing the text color, I added another scroll-driven animation on the heading itself to change its background color. This way, you can add as many animations as you want and change whatever you like. CodePen Embed Fallback Your Turn! CSS Scroll-Driven Animations are more than just a cool trick; they’re a game-changer that opens the door to a whole new world of web design. With just a bit of creativity, you can turn even the most ordinary pages into something interactive, memorable, and truly engaging. The possibilities really are endless, from subtle effects that enhance the user experience, to wild, animated transitions that make your site stand out. So, what would you build with scroll-driven animations? What would you create with this new superpower? Try it out, experiment, and if you come up with something cool, have some ideas, wild experiments, or even weird failures, I’d love to hear about them. 
I’m always excited to see what others come up with, so feel free to share your work, questions, or feedback below. Special thanks to Cristian Díaz for reviewing the examples, making sure everything is accessible, and contributing valuable advice and improvements. Scroll-Driven Sticky Heading originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
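A small aside that is not part of the original article: writing those typewriter keyframes by hand gets tedious for long headings, so here is a rough, hypothetical helper in plain JavaScript that prints the steps for you to paste into a stylesheet. The function name, the 0–30% range, and the naive single-quote handling are all assumptions for illustration, not something the article ships:

// Generates typewriter-style @keyframes steps between two headings,
// spreading the character deletions/insertions evenly across a scroll range.
function typewriterKeyframes(from, to, startPct, endPct) {
  const frames = [];
  const steps = from.length + to.length;       // delete all of `from`, then type all of `to`
  const stepSize = (endPct - startPct) / steps;
  let pct = startPct;
  for (let i = from.length - 1; i >= 0; i--) { // remove one character per step
    pct += stepSize;
    frames.push(`${pct.toFixed(1)}% { content: '${from.slice(0, i)}' }`);
  }
  for (let i = 1; i <= to.length; i++) {       // then add one character per step
    pct += stepSize;
    frames.push(`${pct.toFixed(1)}% { content: '${to.slice(0, i)}' }`);
  }
  return frames.join("\n");
}

console.log(typewriterKeyframes("Primary Colors", "Red Power", 0, 30));

The last generated frame lands exactly on the end percentage (30% here), matching where the hand-written keyframes in the article place the finished heading.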
-
LHB Linux Digest #25.17: AWK Series, at command, Docker Copy, Pangolin and More
by: Abhishek Prakash Fri, 11 Jul 2025 18:12:43 +0530 I told you about the AWK tutorial series in the previous newsletter. Well, it has an awkward start. I thought I would be able to finish, but I could only complete the first three chapters. Accept my apologies. I have the additional responsibilities of a month-old child now 😊 Still, please feel free to explore this work in progress and share your feedback: Mastering AWK as a Linux System Administrator - transform from basic text processing to advanced data manipulation in 10 comprehensive chapters (Linux Handbook, Abhishek Prakash). For some reason, the SVGs of command replay are not displaying properly. I'll be reuploading them as GIFs/videos over the weekend.

-
Chapter 3: Built-in Variables and Field Manipulation
by: Abhishek Prakash Fri, 11 Jul 2025 17:37:02 +0530 You already saw a few built-in variables in the first chapter. Let's have a look at some other built-in variables along with the ones you already saw. Repitition is good for reinforced learning. Sample Data Files Let me create some sample files for you to work with. Save these to follow along the tutorial on your system: Create access.log: 192.168.1.100 - alice [29/Jun/2024:10:15:22] "GET /index.html" 200 1234 192.168.1.101 - bob [29/Jun/2024:10:16:45] "POST /api/login" 200 567 192.168.1.102 - charlie [29/Jun/2024:10:17:10] "GET /images/logo.png" 404 0 10.0.0.50 - admin [29/Jun/2024:10:18:33] "GET /admin/panel" 403 892 192.168.1.100 - alice [29/Jun/2024:10:19:55] "GET /profile" 200 2456 Create inventory.csv: laptop,Dell,XPS13,1299.99,5 desktop,HP,Pavilion,899.50,3 tablet,Apple,iPad,599.00,8 monitor,Samsung,27inch,349.99,12 keyboard,Logitech,MX Keys,99.99,15 FS (Field Separator): How you split your dataYou have already used FS before. FS tells AWK how to slice each line into fields - think of it as choosing the right places to cut your data. Default whitespace splittingBy default, the field separator is white space (space, tab etc). Let's extract user information from our access log: awk '{print "IP:", $1, "User:", $3, "Status:", $7}' access.log It automatically splits on spaces and extracts IP address, username, and HTTP status code. Output: IP: 192.168.1.100 User: alice Status: 200 IP: 192.168.1.101 User: bob Status: 200 IP: 192.168.1.102 User: charlie Status: 404 IP: 10.0.0.50 User: admin Status: 403 IP: 192.168.1.100 User: alice Status: 200 Default Whitespace SplittingCustom field separatorsNow let's process our CSV inventory. Here we define that we have to cut the data at every comma with -F,: awk -F, '{print $1, "by", $2, "costs $" $4}' inventory.csv In this example, it uses comma as a separator to extract product type, manufacturer, and price from CSV. laptop by Dell costs $1299.99 desktop by HP costs $899.50 tablet by Apple costs $599.00 monitor by Samsung costs $349.99 keyboard by Logitech costs $99.99 Custom Field Separator💡 You can also handle multiple separators. Create mixed_data.txt: server01::cpu::75::memory::4096 web02|admin|active|192.168.1.10 db-server,mysql,running,8192,16 cache:redis:online:1024 Now let's work on it. awk -F'[:|,]' '{print "Server:", $1, "Service:", $2, "Info:", $4}' mixed_data.txt It uses a character class to split on colons, pipes, or commas, thus handling inconsistent delimiters. Server: server01 Service: Info: 75 Server: web02 Service: admin Info: 192.168.1.10 Server: db-server Service: mysql Info: 8192 Server: cache Service: redis Info: 1024 💡Newer version of gawk (GNU AWK) has --csv option to better deal with CSV files as some fields may contain comma inside quotes.OFS (Output Field Separator): How you join your dataOFS controls how fields appear in your output - it's like choosing the glue between your data pieces. Let's convert our space-separated log to CSV: awk 'BEGIN {OFS=","} {print $3, $1, $7}' access.log It will set the output separator to comma and create CSV with username, IP, and status. alice,192.168.1.100,200 bob,192.168.1.101,200 charlie,192.168.1.102,404 admin,10.0.0.50,403 alice,192.168.1.100,200 Output Field SeparatorOf course, you can simply use awk '{print $3 "," $1 "," $7}' access.log to achieve the same output, but that's not the point here. 
📋BEGIN is a special block that ensures your formatting is set up correctly before any data processing begins, making it perfect for this type of data transformation task. You can also use it without BEGIN: awk -v OFS="," '{print $3, $1, $7}' access.log Similarly, let's change our inventory CSV to a pipe-delimited report: awk -F, 'BEGIN {OFS="|"} {print $2, $3, $4, $5}' inventory.csv Here's what it would look like: Dell|XPS13|1299.99|5 HP|Pavilion|899.50|3 Apple|iPad|599.00|8 Samsung|27inch|349.99|12 Logitech|MX Keys|99.99|15 CSV to Pipe-delimited report Note that the original files are not touched. You see the output on STDOUT; nothing is written to the input file. RS (Record Separator): How you define records RS tells AWK where one record ends and another begins. We'll use a new sample file multiline_records.txt: Name: John Smith Age: 35 Department: Engineering Salary: 75000 Name: Mary Johnson Age: 42 Department: Marketing Salary: 68000 Name: Bob Wilson Age: 28 Department: Sales Salary: 55000 Process these paragraph-style records with: awk 'BEGIN {RS=""; FS="\n"} { name = substr($1, 7) age = substr($2, 6) dept = substr($3, 13) salary = substr($4, 9) print name, age, dept, salary }' multiline_records.txt It is a bit complicated, but assuming that you regularly deal with data files like this, it will be worth the effort. Here, awk treats empty lines as record separators and each line (\n) within a record as a field, then extracts the values after the colons. Look at the formatted output now: John Smith 35 Engineering 75000 Mary Johnson 42 Marketing 68000 Bob Wilson 28 Sales 55000 ORS (Output Record Separator): How you end records ORS controls what goes at the end of each output record - think of it as choosing your punctuation mark. For example, if you use this command with the inventory.csv file: awk -F, 'BEGIN {ORS=" | "} {print $1}' inventory.csv It will replace newlines with " | " to create a continuous horizontal list of product types. laptop | desktop | tablet | monitor | keyboard | A more practical, real-world use case would be to add HTML line breaks to your log output so that it is displayed properly in a web browser: awk 'BEGIN {ORS="<br>\n"} {print $3, "accessed at", $2}' access.log Here's the output, and feel free to parse it as HTML: alice accessed at -<br> bob accessed at -<br> charlie accessed at -<br> admin accessed at -<br> alice accessed at -<br> NR (Number of Records): Your line counter Honestly, I like to remember it as number of rows. NR tracks which record you're currently processing - like a page number, I mean line number ;) Add line numbers to the inventory file: awk '{printf "%2d: %s\n", NR, $0}' inventory.csv It prints a formatted line number followed by the original line. Deja vu? We have seen this in the first chapter, too. 1: laptop,Dell,XPS13,1299.99,5 2: desktop,HP,Pavilion,899.50,3 3: tablet,Apple,iPad,599.00,8 4: monitor,Samsung,27inch,349.99,12 5: keyboard,Logitech,MX Keys,99.99,15 Now a better idea would be to use this information to deal with specific lines only. awk -F, 'NR >= 2 && NR <= 4 {print "Item " NR ":", $1, $3}' inventory.csv So now, AWK will process only lines 2-4, extracting product type and model. Item 2: desktop Pavilion Item 3: tablet iPad Item 4: monitor 27inch NF (Number of Fields): Your column counter NF tells you how many fields are in each record (row/line). This is excellent when you have to loop over data (discussed in later chapters) or have to get the last column/field for processing.
Create variable_fields.txt: web01 active db02 maintenance scheduled friday cache01 offline backup01 running full-backup nightly api-server online load-balanced Let's work on this data file and make it display the number of fields in each line: awk '{print "Server " $1 " has " NF " fields:", $0}' variable_fields.txt As you can see, it displays the number of fields: Server web01 has 2 fields: web01 active Server db02 has 4 fields: db02 maintenance scheduled friday Server cache01 has 2 fields: cache01 offline Server backup01 has 4 fields: backup01 running full-backup nightly Server api-server has 3 fields: api-server online load-balanced Let's take another example where it always prints the last field irrespective of the number of fields: awk '{print $1 ":", $NF}' variable_fields.txt Works fine, right? web01: active db02: friday cache01: offline backup01: nightly api-server: load-balanced 📋There is no check on the number of columns. If a line has only 5 fields and you want to display the 6th field, it will show blank. There won't be any error.FILENAME: Your file trackerFILENAME shows which file is being processed. This is essential when you handle multiple files. Create these log files: server1.log: ERROR: Database connection failed WARN: High memory usage INFO: Backup completed server2.log: ERROR: Network timeout INFO: Service restarted ERROR: Disk space low Track errors across multiple files but also include from which file the output line is coming from by printing FILENAME: awk '/ERROR/ {print FILENAME ":", $0}' server1.log server2.log As you can see, it finds all ERROR lines and shows which file they came from. server1.log: ERROR: Database connection failed server2.log: ERROR: Network timeout server2.log: ERROR: Disk space low FNR (File Number of Records): Your per-file counterAnother in-built AWK variable that helps while dealing with multiple files. FNR resets to 1 for each new file. Imagine a situation where you have two files to deal with AWK. If you use NR, it will count the number of rows from both files together. FNR on the other hand, will give you the number of records from each file. Let's take an example: awk '{print FILENAME, "line", FNR, "(overall line", NR "):", $0}' server1.log server2.log It shows both the line number within each file (FNR) and the overall line number (NR) across all files. server1.log line 1 (overall line 1): ERROR: Database connection failed server1.log line 2 (overall line 2): WARN: High memory usage server1.log line 3 (overall line 3): INFO: Backup completed server2.log line 1 (overall line 4): ERROR: Network timeout server2.log line 2 (overall line 5): INFO: Service restarted server2.log line 3 (overall line 6): ERROR: Disk space low Field Manipulation: Changing Your DataModifying Existing FieldsApply a 10% discount to all prices: awk -F, 'BEGIN {OFS=","} {$4 = $4 * 0.9; print}' inventory.csv What it does: Multiplies the price field (column 4) by 0.9 and rebuilds the line with commas. Output: laptop,Dell,XPS13,1169.991,5 desktop,HP,Pavilion,809.55,3 tablet,Apple,iPad,539.1,8 monitor,Samsung,27inch,314.991,12 keyboard,Logitech,MX Keys,89.991,15 Adding New FieldsCalculate total inventory value: awk -F, 'BEGIN {OFS=","} { total_value = $4 * $5 print $0, total_value }' inventory.csv What it does: Multiplies price by quantity and adds the result as a new field. 
Output: laptop,Dell,XPS13,1299.99,5,6499.95 desktop,HP,Pavilion,899.50,3,2698.5 tablet,Apple,iPad,599.00,8,4792 monitor,Samsung,27inch,349.99,12,4199.88 keyboard,Logitech,MX Keys,99.99,15,1499.85 Working with Multi-Character DelimitersCreate complex_log.txt: 2024-06-29::10:15:22::INFO::System started successfully 2024-06-29::10:16:45::ERROR::Database connection timeout 2024-06-29::10:17:10::WARN::Low disk space detected 2024-06-29::10:18:33::ERROR::Network unreachable Parse double-colon separated data: awk -F'::' '{print $1, $2, $3 ":", $4}' complex_log.txt What it does: Uses double-colon as field separator to create readable timestamp and message format. Output: 2024-06-29 10:15:22 INFO: System started successfully 2024-06-29 10:16:45 ERROR: Database connection timeout 2024-06-29 10:17:10 WARN: Low disk space detected 2024-06-29 10:18:33 ERROR: Network unreachable 🪧 Time to recall You now have powerful tools for data manipulation: FS/OFS: Control how you split input and join outputRS/ORS: Define what constitutes recordsNR/FNR: Track line numbers globally and per-fileNF: Count fields and access the last columnFILENAME: Identify source filesThese variables work together to give you complete control over how AWK processes your data. Practice ExercisesTry these exercises with the sample files I've provided: Convert the access.log to CSV format with just IP, user, and statusAdd a 10% discount to the items in inventory.csvFind all inventory items with quantity less than 10Add a new field in inventory.csv that shows inventory value by multiplying sock with pricingAdd line numbers only to ERROR entries in the server logsCalculate the average price of all inventory itemsProcess the variable_fields.txt file and show only lines with exactly 3 fieldsIn the next chapter, you'll learn mathematical operations and string functions that will turn AWK into your personal calculator and text processor!
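In case you want to check your work, here are possible solution sketches for two of the practice exercises above, using only the variables covered in this chapter and the sample files created earlier (expected outputs shown as comments):

# Exercise 1: convert access.log to CSV with just IP, user, and status
awk 'BEGIN {OFS=","} {print $1, $3, $7}' access.log
# 192.168.1.100,alice,200  (and so on, one line per request)

# Exercise 7: show only the lines of variable_fields.txt with exactly 3 fields
awk 'NF == 3 {print FNR ":", $0}' variable_fields.txt
# 5: api-server online load-balanced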
-
Chapter 2: Pattern Matching and Basic Operations
by: Abhishek Prakash Fri, 11 Jul 2025 17:35:12 +0530 Think of AWK patterns like a security guard at a nightclub - they decide which lines get past the velvet rope and which actions get executed. Master pattern matching, and you control exactly what AWK processes. Pattern matching fundamentals AWK patterns work like filters: they test each line and execute actions only when conditions are met. No match = no action. Here are some very basic examples of pattern matching: awk '/ERROR/ {print $0}' logfile.txt # Find lines containing "ERROR" awk '/^root/ {print $1}' /etc/passwd # Lines starting with "root" awk '/ssh$/ {print NR}' processes.txt # Lines ending with "ssh" awk '/^ERROR$/ {print}' logfile.txt # Lines containing only ERROR Regular expressions in AWK use the same syntax as grep and sed. The pattern sits between forward slashes /pattern/. You must have some basic understanding of regex to use pattern matching. Tip: Instead of multiple AWK calls: awk '/ERROR/' file && awk '/WARNING/' file Use one call with OR: awk '/ERROR|WARNING/ {print}' file 💡I advise creating the data files and trying all the commands on your system. This will give you a much better understanding of the concepts than just reading the text and the mentioned outputs. Conditional operations: making decisions with if-else AWK's if-else statements work like traffic lights - they direct program flow based on conditions. Create this file as performance.txt: server01 cpu 75 memory 60 disk 45 server02 cpu 45 memory 30 disk 85 server03 cpu 95 memory 85 disk 70 server04 cpu 25 memory 40 disk 20 server05 cpu 65 memory 75 disk 90 And we shall see how you can use if-else to print output that matches a certain pattern. Simple if statement: Binary decisions Think of if like a bouncer - one condition, one action. Let's use this command with the performance.txt we created previously: awk '{if ($3 > 80) print $1, "CPU ALERT"}' performance.txt It will show the lines that have CPU usage ($3 = 3rd column) greater than 80 but print the server name ($1 = first column). server03 CPU ALERT Simple if statement Only server03 exceeds the 80% CPU threshold, so only it triggers the alert. if-else structure: Either-or logic Think of if-else like a fork in the road - two paths, always take one. Let's label the servers based on the disk usage (the 7th column): awk '{ if ($7 > 70) print $1, "HIGH DISK" else print $1, "DISK OK" }' performance.txt Output: server01 DISK OK server02 HIGH DISK server03 DISK OK server04 DISK OK server05 HIGH DISK if-else structure Every server gets classified - no line escapes without a label. 📋The multi-line AWK command can be copy-pasted as it is in the terminal and it should run fine. While it all is just one line, it is easier to understand when written across lines. When you are using it inside bash scripts, always use multiple lines. if-else if-else chain: Multi-tier classification Think of nested conditions like a sorting machine - items flow through multiple gates until they find their category. awk '{ if ($7 > 80) status = "CRITICAL" else if ($7 > 60) status = "WARNING" else if ($7 > 40) status = "MODERATE" else status = "OK" print $1, "disk:", $7"%", status }' performance.txt Output: server01 disk: 45% MODERATE server02 disk: 85% CRITICAL server03 disk: 70% WARNING server04 disk: 20% OK server05 disk: 90% CRITICAL if-else if-else chain Each server cascades through conditions until it hits its classification tier.
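A small aside that is not in the original chapter: the thresholds above do not have to be hard-coded. AWK's -v option lets you pass a value in from the shell, which is handy once these one-liners end up in scripts. A sketch against the same performance.txt, where the limit value is an arbitrary example:

awk -v limit=80 '{
  # $7 is the disk value in performance.txt; limit comes from the shell
  if ($7 > limit) print $1, "disk above", limit "%:", $7 "%"
}' performance.txt
# server02 disk above 80%: 85%
# server05 disk above 80%: 90%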
Complex multi-field analysisLet's make it a bit more complicated by combining CPU, memory, and disk metrics and create a monitoring script: awk '{ cpu = $3; mem = $5; disk = $7 if (cpu > 90 || mem > 90 || disk > 90) alert = "CRITICAL" else if (cpu > 70 && mem > 70) alert = "HIGH_LOAD" else if (cpu > 80 || mem > 80 || disk > 80) alert = "WARNING" else alert = "NORMAL" printf "%-10s CPU:%2d%% MEM:%2d%% DISK:%2d%% [%s]\n", $1, cpu, mem, disk, alert }' performance.txt It should show this output. server01 CPU:75% MEM:60% DISK:45% [NORMAL] server02 CPU:45% MEM:30% DISK:85% [WARNING] server03 CPU:95% MEM:85% DISK:70% [CRITICAL] server04 CPU:25% MEM:40% DISK:20% [NORMAL] server05 CPU:65% MEM:75% DISK:90% [CRITICAL] Complex multi-field analysisThis tiered approach mimics real monitoring systems - critical issues trump warnings, combined load factors matter. You can combine it with proc data and cron to convert it into an actual system resource alert system. Comparison operatorsAWK comparison operators work like mathematical symbols - they return true and false for conditions. This gives you greater control to put login in place. We will use the following data files for our testing in this section. A server_stats.txt file that has the hostname, cpu_cores, memory_mb, cpu_usage, status fields. web01 8 4096 75 online web02 4 2048 45 maintenance db01 16 8192 90 online db02 8 4096 65 online cache01 2 1024 0 offline backup01 4 2048 100 online And a network_ports.txt file that has ip, service, port, protocol and state fields. 192.168.1.10 ssh 22 tcp open 192.168.1.10 http 80 tcp open 192.168.1.15 ftp 21 tcp closed 192.168.1.20 mysql 3306 tcp open 192.168.1.25 custom 8080 tcp open 192.168.1.30 ssh 22 tcp open Numeric comparisonsNumeric comparisons are simple. You use the regular <,>,= etc symbols for comparing numbers. Greater than - Like checking if servers exceed CPU thresholds: awk '$4 > 70 {print $1, "high CPU:", $4"%"}' server_stats.txt Output: web01 high CPU: 75% db01 high CPU: 90% backup01 high CPU: 100% Numeric Comparisons: Greater than.Less than or equal - Like finding servers with limited resources: awk '$2 <= 4 {print $1, "low core count:", $2}' server_stats.txt Output: web02 low core count: 4 cache01 low core count: 2 backup01 low core count: 4 Less than or equalEquals - Like finding servers with zero usage (probably offline): awk '$4 == 0 {print $1, "zero CPU usage"}' server_stats.txt Output: cache01 zero CPU usage Numeric Comparison: EqualsNot equals - Like finding non-standard ports: awk '$3 != 22 && $3 != 80 && $3 != 443 {print $1, "unusual port:", $3}' network_ports.txt Output: 192.168.1.15 unusual port: 21 192.168.1.20 unusual port: 3306 192.168.1.25 unusual port: 8080 Not equalsString comparisonsYou have different operators for comparing strings. They are quite easy to use and understand. Exact string match (==) Let's check servers with running status: awk '$5 == "online" {print $1, "is running"}' server_stats.txt Output: web01 is running db01 is running db02 is running backup01 is running Exact String MatchPattern match (~) Let's find ports that are running a database like sql: awk '$2 ~ /sql/ {print $1, "database service on port", $3}' network_ports.txt Output: 192.168.1.20 database service on port 3306 Pattern MatchDoes NOT match (!~): When you want to exclude the matches. 
For example, echo -e "# This is a comment\nweb01 active\n# Another comment\ndb01 running" | awk '$1 !~ /^#/ {print "Valid config:", $0}' The output will omit lines starting with #: Valid config: web01 active Valid config: db01 runningString Comparisons: Does not Match💡The ~ operator is like a smart search - it finds patterns within strings, not exact matches.Logical operators: &&, || and !Logical operators work like sentence connectors - they join multiple conditions into complex tests. You'll be using them as well to add complex logic to your scripts. Here's the test file process_list.txt for this section: nginx 1234 www-data 50 2.5 running mysql 5678 mysql 200 8.1 running apache 9012 www-data 30 1.2 stopped redis 3456 redis 15 0.8 running postgres 7890 postgres 150 5.5 running sshd 2468 root 5 0.1 running It has process, pid, user, memory_mb, cpu_percent and status fields. AND Operator (&&) - Both conditions must be trueLet's find processes that are both memory AND CPU hungry in our input file. Let's filter the lines that have RAM more than 100 and CPU usage greater than 5. awk '$4 > 100 && $5 > 5.0 {print $1, "resource hog:", $4"MB", $5"%"}' process_list.txt Here's the output: mysql resource hog: 200MB 8.1% postgres resource hog: 150MB 5.5% And OperatorOR Operator (||) - Either condition can be trueThe command filters out important services like mysql or postgres or services with CPU usage greater than 7: awk '$1 == "mysql" || $1 == "postgres" || $5 > 7.0 {print $1, "needs attention"}' process_list.txt Output: mysql needs attention postgres needs attention Or OperatorNOT Operator (!) - Reverses the conditionLet's find all the services that are not active in our test file: awk '!($6 == "running") {print $1, "not running, status:", $6}' process_list.txt Here's the output: apache not running, status: stopped Not OperatorComplex combined LogicYou can combine them to test multiple criteria. Try and figure out what this command does: awk '($3 == "root" && $4 > 10) || ($5 > 2.0 && $6 == "running") { printf "Monitor: %-10s User:%-8s Mem:%3dMB CPU:%.1f%%\n", $1, $3, $4, $5 }' process_list.txt The output should help you understand it: Monitor: nginx User:www-data Mem: 50MB CPU:2.5% Monitor: mysql User:mysql Mem:200MB CPU:8.1% Monitor: postgres User:postgres Mem:150MB CPU:5.5%Complex Combined LogicPractical examples for system administratorsNow, let's see some real-world scenarios where you can use these operators. It will also have some elements from the previous chapters. Example 1: Failed SSH login analysisFind failed SSH attempts with IP addresses. Please note that this may not output anything if you are on a personal system that does not accept SSH connections. awk '/Failed password/ && /ssh/ { for(i=1; i<=NF; i++) if($i ~ /^[0-9]+\.[0-9]+/) print $1, $2, $i }' /var/log/auth.log Example 2: Process memory monitoringLet's create a script that will display processes with high memory consumption at the time when the script was run. ps aux | awk 'NR > 1 { if ($4 > 5.0) printf "HIGH MEM: %s using %.1f%%\n", $11, $4 else if ($4 > 2.0) printf "MEDIUM: %s using %.1f%%\n", $11, $4 }' There are better ways to monitor and setting up alert system though. Example 3: Disk space alertsCheck for filesystems with over 80% full space. df -h | awk 'NR > 1 { gsub(/%/, "", $5) # Remove % symbol if ($5 > 80) printf "WARNING: %s is %s%% full\n", $6, $5 }' Example 4: Log level filteringFilter logs based on the severity levels. This is a dummy example, as you'll need some services running that have these logs level. 
awk '{ if ($3 ~ /ERROR|FATAL/) print "CRITICAL:", $0 else if ($3 ~ /WARNING|WARN/) print "ATTENTION:", $0 else if ($3 ~ /INFO/) print "INFO:", $4, $5, $6 }' application.log Example 5: Network connection analysisAnalyze netstat output for suspicious connections: netstat -an | awk ' $1 ~ /tcp/ && $6 == "ESTABLISHED" { if ($4 !~ /:22$|:80$|:443$/) print "Unusual connection:", $4, "->", $5 } ' Works better on servers. 💡It is more precise to search in fields than searching entire line, when it is suited. For example, if you are looking for username that starts with adm, use awk '$1 ~ /^adm/ {print}' /etc/passwd as you know only the first field consists of usernames.🪧 Time to recall In this chapter, you've learned: Patterns filter which lines get processedComparison operators test numeric and string conditionsLogical operators combine multiple testsRegular expressions provide flexible string matchingField-specific patterns offer precision controlPractice ExercisesFind all users with UID greater than 1000 in /etc/passwdExtract error and warning messages from a log file (if you have one)Show processes using more than 50% CPU from ps aux outputFind /etc/ssh/sshd_config configuration lines that aren't commentsIdentify network connections on non-standard ports (see one of the examples below for reference)In the next chapter, learn about built-in variables and field manipulation - where AWK transforms from a simple filter into a data processing powerhouse.
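If you want to check yourself, here are possible solution sketches for two of the practice exercises, using the field-specific patterns and comparisons from this chapter (paths are the usual locations; adjust them for your distro):

# Users with a UID greater than 1000 (the UID is the third field in /etc/passwd)
awk -F: '$3 > 1000 {print $1, $3}' /etc/passwd

# sshd_config lines that aren't comments or blank lines
awk 'NF > 0 && $1 !~ /^#/ {print}' /etc/ssh/sshd_config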
-
Chapter 1: Introduction to AWK
by: Abhishek Prakash Fri, 11 Jul 2025 17:33:37 +0530 If you're a Linux system administrator, you've probably encountered situations where you need to extract specific information from log files, process command output, or manipulate text data. While tools like grep and sed are useful, there's another much more powerful tool in your arsenal that can handle complex text processing tasks with remarkable ease: AWK. What is AWK and why should You care about it?AWK is not just a UNIX command, it is a powerful programming language designed for pattern scanning and data extraction. Named after its creators (Aho, Weinberger, and Kernighan), AWK excels at processing structured text data, making it invaluable for system administrators who regularly work with log files, configuration files, and command output. Here's why AWK should be in your and every sysadmin's toolkit: Built for text processing: AWK automatically splits input into fields and records, making column-based data trivial to work with.Pattern-action paradigm: You can easily define what to look for (pattern) and what to do when found (action).No compilation needed: Unlike most other programming languages, AWK scripts do not need to be compiled first. AWK scripts run directly and thus making them perfect for quick one-liners and shell integration.Handles complex logic: Unlike simple Linux commands, AWK supports variables, arrays, functions, and control structures.Available everywhere: Available on virtually every Unix-like system and all Linux distros by default.AWK vs sed vs grep: When to use which toolgrep, sed and awk all three deal with data processing and that may make you wonder whether you should use sed or grep or AWK. In my opinion, you should Use grep when: You need to find lines matching a patternSimple text searching and filteringBinary yes/no decisions about line content# Find all SSH login attempts grep "ssh" /var/log/auth.log Use sed when: You need to perform find-and-replace operationsSimple text transformationsStream editing with minimal logic# Replace all occurrences of "old" with "new" sed 's/old/new/g' file.txt Use AWK when: You need to work with columnar dataComplex pattern matching with actionsMathematical operations on dataMultiple conditions and logic branchesGenerating reports or summaries# Print username and home directory from /etc/passwd awk -F: '{print $1, $6}' /etc/passwd Now that you are a bit more clear about when to use AWK, let's see the basics of AWK command structure. Basic AWK syntax and structureAWK follows a simple but powerful syntax: awk 'pattern { action }' file Pattern: Defines when the action should be executed (optional)Action: What to do when the pattern matches (optional)File: Input file to process (can also read from stdin)💡If you omit the pattern, the action applies to every line. If you omit the action, matching lines are printed (like grep).Let's get started with using AWK for some simple but interesting use cases. Your first AWK Command: Printing specific columnsLet's start with a practical example. Suppose you want to see all users in Linux and their home directories from /etc/passwd file: awk -F: '{print $1, $6}' /etc/passwd The output should be something like this: See all usersroot /root daemon /usr/sbin bin /bin sys /dev sync /bin games /usr/games ... 
Let's break this down: -F: sets the field separator to colon (since /etc/passwd uses colons) $1 refers to the first field (username) $6 refers to the sixth field (home directory) print outputs the specified fields Understanding AWK's automatic field splitting AWK automatically splits each input line into fields based on whitespace (i.e. spaces and tabs) by default. Each field is accessible through variables: $0 - The entire line $1 - First field $2 - Second field $NF - Last field (NF = Number of Fields) $(NF-1) - Second to last field Let's see it in action by extracting process information from the ps command output. ps aux | awk '{print $1, $2, $NF}' This prints the user, process ID, and command for each running process. Process ID and Command NF is one of the several built-in variables. Know built-in variables: Your data processing toolkit AWK provides several built-in variables that make text processing easier. AWK built-in variables FS (Field separator) By default, AWK uses white space, i.e. tabs and spaces, as the field separator. With FS, you can define other field separators in your input. For example, the /etc/passwd file contains lines that have values separated by colon :, so if you define the field separator as : and extract the first column, you'll get the list of the users on the system. awk -F: '{print $1}' /etc/passwd Field Separator You'll get the same result with the following command. awk 'BEGIN {FS=":"} {print $1}' /etc/passwd More about BEGIN in a later part of this AWK series. NR (Number of records/lines) NR keeps track of the current line number. It is helpful when you have to take actions for certain lines in a file. For example, the command below will output the content of the /etc/passwd file but with line numbers attached at the beginning. awk '{print NR, $0}' /etc/passwd Print with line number NF (Number of fields) NF contains the number of fields in the current record, which is basically the number of columns when separated by FS. For example, the command below will print only the lines that have exactly 5 fields. awk -F: 'NF == 5 {print $0}' /etc/passwd Number of Fields Practical examples for system administrators Let's see some practical use cases where you can utilize the power of AWK. Example 1: Analyzing disk usage The command below shows disk usage percentages and mount points, sorted by usage, while skipping the header line with NR > 1. df -h | awk 'NR > 1 {print $5, $6}' | sort -nr Analyze Disk usage Example 2: Finding large files I know that the find command is more popular for this, but you can also use AWK to print file sizes and names for files larger than 1MB. ls -l | awk '$5 > 1000000 {print $5, $NF}' Finding Large files Example 3: Processing log files 🚧The command below will not work if your system uses systemd. Distros with systemd use journal logs, not syslog. awk '/ERROR/ {print $1, $2, $NF}' /var/log/syslog This extracts the timestamp and error message from log lines containing "ERROR". Example 4: Memory usage summary Use AWK to calculate and display the memory usage percentage. free -m | awk 'NR==2 {printf "Memory Usage: %.1f%%\n", $3/$2 * 100}' Memory Usage Summary It is slightly more complicated than the other examples, so let me break it down for you. Typical free command output looks like this: total used free shared buff/cache available Mem: 7974 3052 723 321 4199 4280 Swap: 2048 10 2038 With NR==2 we only take the second line from the above output.
$2 (second column) gives the total memory and $3 gives the used memory. Next, in printf "Memory Usage: %.1f%%\n", $3/$2 * 100, the printf command prints formatted output, %.1f%% shows one decimal place followed by a literal % symbol, and $3/$2 * 100 calculates the memory used as a percentage. So in the example above, you get the output as Memory Usage: 38.3%, where 3052 is ~38.3% of 7974. You'll learn more about arithmetic operations with AWK later in this series.
🪧 Time to recall
In this introduction, you've learned:
AWK is a pattern-action language perfect for structured text processing
It automatically splits input into fields, making columnar data easy to work with
Built-in variables like NR, NF, and FS provide powerful processing capabilities
AWK excels where grep and sed fall short: complex data extraction and manipulation
AWK's real power becomes apparent when you need to process data that requires logic, calculations, or complex pattern matching. In the next part of this series, we'll dive deeper into pattern matching and conditional operations that will transform how you handle text processing tasks.
🏋️ Practice exercises
Try these exercises to reinforce what you've learned:
Display only usernames from /etc/passwd
Show the last field of each line in any file
Print line numbers along with lines containing "root" in /etc/passwd
Extract process names and their memory usage from ps aux output
Count the number of fields in each line of a file
The solutions involve combining the concepts we've covered: field variables, built-in variables, and basic pattern matching. In the next tutorial, we'll explore these patterns in much more detail.
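Before you move on: if you want to check your work on the exercises above, here is one possible set of solutions as plain awk one-liners (a sketch; treat file.txt as a placeholder for any file you like):
awk -F: '{print $1}' /etc/passwd          # 1. usernames only
awk '{print $NF}' file.txt                # 2. last field of each line
awk '/root/ {print NR, $0}' /etc/passwd   # 3. line numbers for lines containing "root"
ps aux | awk 'NR > 1 {print $11, $4}'     # 4. command name and %MEM column
awk '{print NF}' file.txt                 # 5. number of fields in each line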
-
401: Outgoing Email
by: Chris Coyier Thu, 10 Jul 2025 11:04:57 +0000 Hi! We’re back! Weird, right? It’s been over 2 years. We took a break after episode 400, not because we ran out of things to talk about, but because we were so focused on our CodePen 2.0 work, it got old not being able to discuss it yet. We’ll be talking plenty about that going forward. But CodePen has a ton of moving parts, so we’ll be talking about all of it. This week we’ll be kicking off the podcast again talking about a huge and vital bit of CodePen infrastructure: our email system. Outgoing email, that is. We get plenty of incoming email from y’all as well, but this is about the app itself sending email. Timeline 00:06 We’re back! 01:22 Our transactional email system 05:21 Templating in Postmark 08:31 Hitting APIs to send emails 10:23 Building a sponsored email 17:20 Marie’s Monday morning routine 24:19 Analytics and metrics 26:55 Dealing with large images 30:12 MJML framework for email Links Postmark SparkPost Cloudflare R2 MJML BuySellAds
-
FOSS Weekly #25.28: Xfce Customization, CoMaps, Disk Space Clean-up, Deprecated Commands and More
by: Abhishek Prakash Thu, 10 Jul 2025 04:58:54 GMT After Denmark, now the news is that the French city of Lyon is ditching Microsoft to set up a collaborative office suite with a few open source software tools. Now that calls for a 'fest for the luminaries' 😉
French City of Lyon Kicks Out Microsoft: Microsoft faces growing rejection in Europe while open source software sees growing adoption. (It's FOSS News, Sourav Rudra)
💬 Let's see what else you get in this edition:
A new Google Maps alternative.
LibreOffice working on a long-requested feature.
Twitter's original co-founder launching an open source project.
A new app for transferring files between your Linux system and Android device.
And other Linux news, tips, and, of course, memes!
📰 Linux and Open Source News
CoMaps has been officially launched, and it's looking good.
bitchat is Jack Dorsey's new open source venture.
I am happy that LibreOffice is finally working on bringing Markdown support.
HDR support is coming to the Linux version of Blender soon.
Week after week, Fedora comes up with plans to drop support for aging technologies. This time, Fedora wants to drop UEFI boot support on MBR.
Another Radical Move as Fedora Now Wants to Drop UEFI Boot Support on MBR: UEFI boot support for MBR could be removed in Fedora 43. (It's FOSS News, Sourav Rudra)
🧠 What We're Thinking About
AI has reduced the cost of adding new software to almost zero, but the price of understanding, testing, and trusting that code is higher than ever.
Writing Code Was Never The Bottleneck: LLMs make it easier to write code, but understanding, reviewing, and maintaining it still takes time, trust, and good judgment. (ordep.dev, Pedro Tavares)
🧮 Linux Tips, Tutorials, and More
Start avoiding these deprecated Linux commands!
I faced this 'failed to synchronize all databases' error in Pacman over the weekend and shared a quick fix for it.
This is my favorite article that explains the concept of Linux distributions in an easy-to-understand way using an interesting analogy. I have shared it a few times in past newsletters too.
Here's another explainer on the sources.list concept in Ubuntu, although I have to update it as things have changed with Ubuntu 24.04.
A few tips on freeing up disk space on Ubuntu and Linux Mint distros.
7 Simple Ways to Free Up Space on Ubuntu and Linux Mint: Running out of space on your Linux system? Here are several ways you can clean up your system to free up space on Ubuntu and other Ubuntu-based Linux distributions. (It's FOSS, Abhishek Prakash)
Desktop Linux is mostly neglected by the industry but loved by the community. For the past 13 years, It's FOSS has been helping people use Linux on their personal computers. And we are now facing an existential threat from AI models stealing our content. If you like what we do and would love to support our work, please become an It's FOSS Plus member. It costs $24 a year (less than the cost of a burger meal each month) and you get an ad-free reading experience with the satisfaction of helping the desktop Linux community. Join It's FOSS Plus
👷 Homelab and Hardware Corner
It is important to keep tabs on the CPU temperature of your Raspberry Pi.
Monitor CPU & GPU Temperature in Raspberry Pi [CLI and GUI]: Here's how to keep an eye on the CPU and GPU temperature of your Raspberry Pi in both GUI and command line. (It's FOSS, Abhishek Prakash)
Spotted this new GaN charging station on Kickstarter. The real-time visuals on power output are excellent for people who like to keep track of data on all things possible.
✨ Project Highlight
Packet offers a convenient method for wirelessly transferring files between Linux and Android devices, in both directions.
Packet is the Linux App You Didn't Know You Needed for Fast Android File Transfers: Simple, fast file sharing between Linux and Android. (It's FOSS, Sourav Rudra)
📽️ Videos I am Creating for You
You won't believe Xfce can look this beautiful. A detailed Xfce customization video is the latest on our YouTube channel. Subscribe to It's FOSS YouTube Channel.
🧩 Quiz Time
Can you beat this command-themed memory challenge?
Memory Match Commands and Their Categories: An enjoyable way to test your memory by matching Linux commands with their respective categories. (It's FOSS, Abhishek Prakash)
💡 Quick Handy Tip
In Xfce, you can minimize all other windows except the current one. To do this, right-click on the title bar and then select "Minimize Other Windows".
🤣 Meme of the Week
🗓️ Tech Trivia
On July 8, 1946, the Moore School of Electrical Engineering at the University of Pennsylvania hosted the first-ever formal lecture series on electronic digital computers. These influential lectures on computer design directly inspired the development of some of the world's earliest stored-program computers, including the groundbreaking EDSAC.
🧑🤝🧑 FOSSverse Corner
Can you help this FOSSer pick a GPU that works well with Kubuntu 24.04?
Graphic cards that work on Kubuntu 24.04: "Hi, I have on my system Kubutu 24.04. Would like to get a new graphic card, as the one currently using is very old. Any suggestions? Currently using Kubuntu 24.04 on a MSI B550 Gaming mb, Gen3. 32 gigs of ram, 12 × AMD Ryzen 5 5500 CPU. System: Kernel: 6.8.0-62-generic arch: x86_64 bits: 64 compiler: gcc v: 13.3.0 Desktop: KDE Plasma v: 5.27.12 Distro: Kubuntu 24.04.2 LTS (Noble Numbat) base: Ubuntu Machine: Type: Desktop Mobo: Micro-Star model: B550 GAMING GEN3 (MS-7B86) v: 5.0 serial:…" (It's FOSS Community, phlag311)
❤️ With love
Please share it with your Linux-using friends and encourage them to subscribe (hint: it's here). Share the articles in Linux subreddits and community forums. Follow us on Google News and stay updated in your News feed. Opt for It's FOSS Plus membership and support us 🙏 Enjoy FOSS 😄
-
Packet is the Linux App You Didn’t Know You Needed for Fast Android File Transfers
by: Sourav Rudra Mon, 07 Jul 2025 13:13:17 GMT Most file sharing today takes place through cloud services, but that's not always necessary. Local file transfers are still relevant, letting people send files directly between devices on the same network without involving a nosy middleman (a server, in this case). Instead of uploading confidential documents on WhatsApp and calling it a day, people could share them directly over their local network. This approach is faster, more private, and more reliable than relying on a third-party server. Remember, if you value your data, so does Meta. 🕵️♂️ That's where Packet comes in, offering an easy, secure way to transfer files directly between Linux and Android devices.
Wireless File Transfers via Quick Share
Packet is a lightweight, open source app for Linux that makes transferring files effortless. It leverages a partial implementation of Google's Quick Share protocol (which is proprietary) to enable easy wireless transfers over your local Wi-Fi network (via mDNS) without needing any cables or cloud servers. In addition to that, Packet supports device discovery via Bluetooth, making it easy to find nearby devices without manual setup. It can also be integrated with GNOME's Nautilus file manager (Files), allowing you to send files directly from your desktop with a simple right-click (requires additional configuration).
⭐ Key Features
Quick Share Support
Local, Private Transfers
File Transfer Notifications
Nautilus Integration for GNOME
How to Send Files Using Packet?
First things first, you have to download and install the latest release of Packet from Flathub by running this command in your terminal:
flatpak install flathub io.github.nozwock.Packet
Once launched, sending files from your Linux computer to your Android smartphone is straightforward. Enable Bluetooth on your laptop/computer, then click on the big blue "Add Files" button and select the files you want to send. Adding new files for transfer to Packet is easy. You can also drag and drop files directly into Packet for a quicker sharing experience. If you are looking to transfer a whole folder, it's best to first compress it into an archive like a TAR or ZIP, then send it through Packet for transmission. Once you are done choosing files, choose your Android phone from the recipients list and verify the code shown on screen. File transfers from Linux to Android are lightning fast! Though before you do all that, ensure that Quick Share is set up on your smartphone to allow Nearby sharing with everyone. Additionally, take note of your device's name; this is how it will appear on your Linux machine when sending/receiving files. When you start the transfer, your smartphone will prompt you to "Accept" or "Decline" the Quick Share request. Only proceed if the PIN or code shown on both devices matches to ensure a secure transfer. Transferring files the other way around, from Android to Linux, is just as simple. On your Android device, select the files you want to share, tap the "Share" button, and choose "Quick Share". Your Linux computer should appear in the list if Packet is running and your device is discoverable. File transfers from Android to Linux are just as fast. You can change your Linux device's name from the "Preferences" menu in Packet (accessible via the hamburger menu). This is the name that will show up on your Android device when sharing files. Packet also shows handy system notifications for file transfers, so you don't miss a thing.
Packet shows helpful notifications and lets you change a few basic settings. If you use the GNOME Files app (Nautilus), then there’s an optional plugin that adds a "Send with Packet" option to the right-click menu, making it even easier to share files without opening the app manually. Overall, Packet feels like a practical tool for local file sharing between devices. It works well across Android and Linux devices, and can do the same for two Linux devices on the same network. And, I must say, it gives tough competition to LocalSend, another file transfer tool that’s an AirDrop alternative for Linux users!
Suggested Read 📖 LocalSend: An Open-Source AirDrop Alternative For Everyone! It's time to ditch platform-specific solutions like AirDrop! (It's FOSS News, Sourav Rudra)
-
Better CSS Shapes Using shape() — Part 4: Close and Move
by: Temani Afif Mon, 07 Jul 2025 12:48:29 +0000 This is the fourth post in a series about the new CSS shape() function. So far, we’ve covered the most common commands you will use to draw various shapes, including lines, arcs, and curves. This time, I want to introduce you to two more commands: close and move. They’re fairly simple in practice, and I think you will rarely use them, but they are incredibly useful when you need them. Better CSS Shapes Using shape() Lines and Arcs More on Arcs Curves Close and Move (you are here!) The close command In the first part, we said that shape() always starts with a from command to define the first starting point but what about the end? It should end with a close command. That’s true. I never did because I either “close” the shape myself or rely on the browser to “close” it for me. Said like that, it’s a bit confusing, but let’s take a simple example to better understand: clip-path: shape(from 0 0, line to 100% 0, line to 100% 100%) If you try this code, you will get a triangle shape, but if you look closely, you will notice that we have only two line commands whereas, to draw a triangle, we need a total of three lines. The last line between 100% 100% and 0 0 is implicit, and that’s the part where the browser is closing the shape for me without having to explicitly use a close command. I could have written the following: clip-path: shape(from 0 0, line to 100% 0, line to 100% 100%, close) Or instead, define the last line by myself: clip-path: shape(from 0 0, line to 100% 0, line to 100% 100%, line to 0 0) But since the browser is able to close the shape alone, there is no need to add that last line command nor do we need to explicitly add the close command. This might lead you to think that the close command is useless, right? It’s true in most cases (after all, I have written three articles about shape() without using it), but it’s important to know about it and what it does. In some particular cases, it can be useful, especially if used in the middle of a shape. CodePen Embed Fallback In this example, my starting point is the center and the logic of the shape is to draw four triangles. In the process, I need to get back to the center each time. So, instead of writing line to center, I simply write close and the browser will automatically get back to the initial point! Intuitively, we should write the following: clip-path: shape( from center, line to 20% 0, hline by 60%, line to center, /* triangle 1 */ line to 100% 20%, vline by 60%, line to center, /* triangle 2 */ line to 20% 100%, hline by 60%, line to center, /* triangle 3 */ line to 0 20%, vline by 60% /* triangle 4 */ ) But we can optimize it a little and simply do this instead: clip-path: shape( from center, line to 20% 0, hline by 60%, close, line to 100% 20%, vline by 60%, close, line to 20% 100%, hline by 60%, close, line to 0 20%, vline by 60% ) We write less code, sure, but another important thing is that if I update the center value with another position, the close command will follow that position. CodePen Embed Fallback Don’t forget about this trick. It can help you optimize a lot of shapes by writing less code. The move command Let’s turn our attention to another shape() command you may rarely use, but can be incredibly useful in certain situations: the move command. Most times when we need to draw a shape, it’s actually one continuous shape. But it may happen that our shape is composed of different parts not linked together. 
In these situations, the move command is what you will need. Let’s take an example, similar to the previous one, but this time the triangles don’t touch each other: CodePen Embed Fallback Intuitively, we may think we need four separate elements, each with its own shape() definition. But that example is a single shape! The trick is to draw the first triangle, then “move” somewhere else to draw the next one, and so on. The move command is similar to the from command, but we use it in the middle of shape(). clip-path: shape( from 50% 40%, line to 20% 0, hline by 60%, close, /* triangle 1 */ move to 60% 50%, line to 100% 20%, vline by 60%, close, /* triangle 2 */ move to 50% 60%, line to 20% 100%, hline by 60%, close, /* triangle 3 */ move to 40% 50%, line to 0 20%, vline by 60% /* triangle 4 */ ) After drawing the first triangle, we “close” it and “move” to a new point to draw the next triangle. We can have multiple shapes using a single shape() definition. A more generic version of the code looks like this: clip-path: shape( from X1 Y1, ..., close, /* shape 1 */ move to X2 Y2, ..., close, /* shape 2 */ ... move to Xn Yn, ... /* shape N */ ) The close commands before the move commands aren’t mandatory, so the code can be simplified to this: clip-path: shape( from X1 Y1, ..., /* shape 1 */ move to X2 Y2, ..., /* shape 2 */ ... move to Xn Yn, ... /* shape N */ ) CodePen Embed Fallback Let’s look at a few interesting use cases where this technique can be helpful. Cut-out shapes Previously, I shared a trick on how to create cut-out shapes using clip-path: polygon(). Starting from any kind of polygon, we can easily invert it to get its cut-out version: CodePen Embed Fallback We can do the same using shape(). The idea is to have an intersection between the main shape and the rectangle shape that fits the element boundaries. We need two shapes, hence the need for the move command. The code is as follows: .shape { clip-path: shape(from ...., move to 0 0, hline to 100%, vline to 100%, hline to 0); } You start by creating your main shape, then you “move” to 0 0 and create the rectangle shape (remember, it’s the first shape we created in the first part of this series). We can even go further and introduce a CSS variable to easily switch between the normal shape and the inverted one. .shape { clip-path: shape(from .... var(--i,)); } .invert { --i:,move to 0 0, hline to 100%, vline to 100%, hline to 0; } By default, --i is not defined so var(--i,) will be empty and we get the main shape. If we define the variable with the rectangle shape, we get the inverted version. Here is an example using a rounded hexagon shape: CodePen Embed Fallback In reality, the code should be as follows: .shape { clip-path: shape(evenodd from .... var(--i,)); } .invert { --i:,move to 0 0, hline to 100%, vline to 100%, hline to 0; } Notice the evenodd I am adding at the beginning of shape(). I won’t bother you with a detailed explanation of what it does, but in some cases, the inverted shape is not visible and the fix is to add evenodd at the beginning. You can check the MDN page for more details. Another improvement we can make is to add a variable to control the space around the shape. Let’s suppose you want to make the hexagon shape of the previous example smaller. It‘s tedious to update the code of the hexagon, but it’s easier to update the code of the rectangle shape. .shape { clip-path: shape(evenodd from ...
var(--i,)) content-box; } .invert { --d: 20px; padding: var(--d); --i: ,move to calc(-1*var(--d)) calc(-1*var(--d)), hline to calc(100% + var(--d)), vline to calc(100% + var(--d)), hline to calc(-1*var(--d)); } We first update the reference box of the shape to be content-box. Then we add some padding, which will logically reduce the area of the shape since it will no longer include the padding (nor the border). The padding is excluded (invisible) by default, and here comes the trick: we update the rectangle shape to re-include the padding. That is why the --i variable is so verbose. It uses the value of the padding to extend the rectangle area and cover the whole element as if we didn’t have content-box. CodePen Embed Fallback Not only can you easily invert any kind of shape, but you can also control the space around it! Here is another demo using the CSS-Tricks logo to illustrate how easy the method is: CodePen Embed Fallback This exact same example is available in my SVG-to-CSS converter, providing you with the shape() code without having to do all of the math. Repetitive shapes Another interesting use case of the move command is when we need to repeat the same shape multiple times. Do you remember the difference between the by and the to directives? The by directive allows us to define relative coordinates considering the previous point. So, if we create our shape using only by, we can easily reuse the same code as many times as we want. Let’s start with a simple example of a circle shape: clip-path: shape(from X Y, arc by 0 -50px of 1%, arc by 0 50px of 1%) Starting from X Y, I draw a first arc moving upward by 50px, then I get back to X Y with another arc using the same offset, but downward. If you are a bit lost with the syntax, try reviewing Part 1 to refresh your memory about the arc command. How I drew the shape is not important. What is important is that whatever the value of X Y is, I will always get the same circle but in a different position. Do you see where I am going with this idea? If I want to add another circle, I simply repeat the same code with a different X Y. clip-path: shape( from X1 Y1, arc by 0 -50px of 1%, arc by 0 50px of 1%, move to X2 Y2, arc by 0 -50px of 1%, arc by 0 50px of 1% ) And since the code is the same, I can store the circle shape into a CSS variable and draw as many circles as I want: .shape { --sh:, arc by 0 -50px of 1%, arc by 0 50px of 1%; clip-path: shape( from X1 Y1 var(--sh), move to X2 Y2 var(--sh), ... move to Xn Yn var(--sh) ) } You don’t want a circle? Easy, you can update the --sh variable with any shape you want. Here is an example with three different shapes: CodePen Embed Fallback And guess what? You can invert the whole thing using the cut-out technique by adding the rectangle shape at the end: CodePen Embed Fallback This code is a perfect example of the shape() function’s power. We don’t have any code duplication and we can simply adjust the shape with CSS variables. This is something we are unable to achieve with the path() function because it doesn’t support variables. Conclusion That’s all for this fourth installment of our series on the CSS shape() function! We didn’t make any super complex shapes, but we learned how two simple commands can open a lot of possibilities of what can be done using shape(). Just for fun, here is one more demo recreating a classic three-dot loader using the last technique we covered.
Notice how much further we could go, adding things like animation to the mix: CodePen Embed Fallback Better CSS Shapes Using shape() Lines and Arcs More on Arcs Curves Close and Move (you are here!) Better CSS Shapes Using shape() — Part 4: Close and Move originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
-
Fixing 'failed to synchronize all databases' Pacman Error in Arch Linux
by: Abhishek Prakash Sun, 06 Jul 2025 04:43:46 GMT I was trying to update my CachyOS system in the usual Arch way when I encountered this 'failed to synchronize all databases' error.
sudo pacman -Syu
[sudo] password for abhishek:
:: Synchronizing package databases...
error: failed to synchronize all databases (unable to lock database)
The fix was rather simple. It worked effortlessly for me and I hope it does for you, too.
Handling the 'failed to synchronize all databases' error
Check that no other program is using the pacman command:
ps -aux | grep -i pacman
If you see a single line of output with grep --color=auto -i pacman at the end, it means that no program other than the grep command you just ran is using pacman. If you see some other programs, use their process ID to kill them first and then use this command to remove the lock from the database:
sudo rm /var/lib/pacman/db.lck
Once done, you can run the pacman update again to see if things are working smoothly or not. Here's a screenshot of the entire scenario on my CachyOS Linux system:
That didn't work? Try this
In some rare cases, just removing the database lock might not fix the issue. What you could try is deleting the entire local database cache. The next pacman update will take longer as it will download plenty, but it may fix your issue.
sudo rm /var/lib/pacman/sync/*.*
Reason why you see this 'unable to lock databases' error
For the curious few who would like to know why they encountered this failed to synchronize all databases (unable to lock database) error, let me explain. Pacman commands are just one way to install or update packages on an Arch-based system. There could be Pamac or some other tool like KDE Discover with their respective PackageKit plugins, or some other instance of pacman running in another terminal. Two processes trying to modify the system package database at the same time could be problematic. This is why the built-in security mechanism in Arch locks the database by creating the /var/lib/pacman/db.lck file. This is an indication to let pacman know that some program is using the package database. Once the program finishes up successfully, this lock file is deleted automatically. In some cases, this lock file might not be deleted. For instance, when you turn off your system while the pacman command is still running in a terminal. This is what happened in my case. I ran the pacman -Syu command and it was waiting for my Y to start installing the updates. I got distracted and forcibly turned the system off. On the next boot, I encountered this error when I tried updating the system. This is also the reason why you should check if some other program might be using pacman underneath. Force removing the lock file when there is an active program using the database is not a good idea. In some rare cases, the lock file removal alone won't fix the issue. You may have to delete the local database cache. This happens when the local database of packages is corrupted. This is what I mentioned in the earlier section.
Did it fix the issue for you?
Now that you know the root cause of the issue and the ways of fixing it, let me know if the fix I shared with you here worked for you or not. If it did, drop a quick "Thank You". That is a motivation booster. And if it didn't, I might try helping you further. The comment section is all yours.
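For quick reference, here is the whole fix condensed into one sequence (a sketch; the PID is a placeholder taken from the ps output, and the kill step is only needed if another program actually shows up as using pacman):
ps aux | grep -i pacman           # check whether any other program is using pacman
sudo kill 12345                   # only if another process showed up; 12345 is a placeholder PID
sudo rm /var/lib/pacman/db.lck    # remove the stale lock file
sudo pacman -Syu                  # run the update again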
-
LHB Linux Digest #25.16: AWK, History, YAML Validation, Mastering Top and More
by: Abhishek Prakash Fri, 04 Jul 2025 17:30:52 +0530 Is it too 'AWKward' to use AWK in the age of AI? I don't think so. AWK is so underrated despite being so powerful for creating useful automation scripts. We have had a very good intro to AWK and now I am working on a series that covers the basics of AWK, just like our Bash series. Hopefully, you'll see it in the next newsletter. Stay tuned 😊 This post is for subscribers only.
-
Ansible 101: Install, Configure, and Automate Linux in Minutes
by: Adnan Shabbir Fri, 04 Jul 2025 05:43:38 +0000 In this technologically rich era, businesses deploy servers in no time and also manage hundreds of devices on the cloud. All this is possible with the assistance of automation engines like Ansible. Ansible is an automation server that manages multiple remote hosts and can deploy applications, install packages, troubleshoot systems remotely, perform network automation, handle configuration management, and much more, all at once or one by one. In today's guide, we'll elaborate on the steps to install, configure, and automate Linux in minutes. This guide is broadly divided into 2 categories:
Install and Configure Ansible → Practical demonstration of installing and configuring Ansible on the Control Node.
Ansible Playbooks | Automate Linux in Minutes → Creating Ansible playbooks and implementing them on the managed nodes.
Let's have a look at the brief outline:
Install and Configure Ansible | Control Node and Host Nodes
Step 1: Install and Configure Ansible on the Control Node
Step 2: Create an Inventory/Hosts File on the Control Node
Step 3: Install and Configure SSH on the Host Nodes
Step 4: Create an Ansible User for Remote Connections
Step 5: Set up SSH Key | Generate and Copy
Step 6: Test the Connection | Control Node to Host Nodes
Ansible Playbooks | Automate Linux in Minutes
YAML Basics
Step 1: Create an Ansible Playbook
Step 2: Automate the Tasks
Conclusion
Install and Configure Ansible | Control Node and Host Nodes
As already discussed, Ansible is an automation server that has a control node and some managed nodes. In this section, we'll demonstrate how you can install and configure Ansible to work properly.
Prerequisites: Understanding the Basics | Control Node, Managed Nodes, Inventory File, Playbook
Before proceeding to the real-time automation, let's have a look at the components we need to understand:
Control Node: The system where Ansible is installed. In this guide, the Ansible server is set up on openSUSE Linux.
Managed Nodes: The servers that are being managed by the Ansible control node.
Inventory/Hosts File: The inventory file contains a list of host IPs that the control node will manage.
Playbook: A playbook is an automated YAML-based script that Ansible uses to perform automated tasks on the managed nodes.
Let's now start the initial configuration:
Step 1: Install and Configure Ansible on the Control Node
Let's set up Ansible on the control node, i.e., install Ansible on the control node:
sudo zypper install ansible
The command will automatically select the required essentials (Python and its associated dependencies, especially). Here are the commands to install Ansible on other Linux distros:
sudo dnf install ansible
sudo apt install ansible
sudo pacman -S ansible
Let's check the installed version:
ansible --version
Step 2: Create an Inventory/Hosts File on the Control Node
The inventory file is by default located at /etc/ansible/hosts. However, if it is not available, we can create it manually:
sudo nano /etc/ansible/hosts
Here, [main] is a group representing specific servers. Similarly, we can create multiple groups in the same pattern to access the servers and perform the required operation on the group as a whole.
Step 3: Install and Configure SSH on the Host Nodes
Ansible communicates with the host nodes via SSH. Now, we'll set up SSH on the host nodes (managed nodes). The process in this step is performed on all the managed nodes.
Let's first install SSH on the system:
sudo apt install openssh-server
If you have managed nodes other than Ubuntu/Debian, you can use one of the following commands, as per your Linux distribution, to install SSH:
sudo dnf install openssh-server
sudo zypper install openssh
sudo pacman -S openssh
Since we have only one control node, for better security we only want the SSH port reachable, ideally only from the control node. First, allow SSH through the firewall:
sudo ufw allow ssh
Note: If you have changed the default SSH port, then you have to mention that port number to open that specific port.
Allow a Specific IP on the SSH Port: When configuring the firewall on the managed nodes, you can allow only a specific IP to interact with the managed node over SSH. For instance, the command below will only allow the IP 192.168.140.142 to interact over the SSH port.
sudo ufw allow from 192.168.140.142 to any port 22
Let's reload the firewall:
sudo ufw reload
Confirming the firewall status:
sudo ufw status
Step 4: Create an Ansible User for Remote Connections
Let's use the adduser command to create a new user for Ansible. The control node only communicates through the Ansible user:
sudo adduser <username>
Adding it to the sudo group:
sudo usermod -aG sudo <username>
Next, set up passwordless sudo for this user only. Open the /etc/sudoers file and add the required NOPASSWD entry for this user at the end of the file:
Step 5: Set up SSH Key | Generate and Copy
Let's generate the SSH keys on the control node:
ssh-keygen
Now, copy these keys to the remote hosts:
ssh-copy-id username@IP-address/hostname
Note: There are multiple ways of generating and copying the SSH keys. Read our dedicated guide on "How to Set up SSH Keys" to have a detailed overview of how SSH keys work.
Step 6: Test the Connection | Control Node to Host Nodes
Once every step is performed error-free, let's test the connection from the control node to the managed hosts. There are two ways to test the connection, i.e., one-to-one and one-to-many. The Ansible command below uses its "ping" module to test the connection from the control node to one of the hosts, i.e., linuxhint.
ansible linuxhint -m ping -u <user-name>
Next, the following Ansible command pings all the hosts that the control node has to manage:
ansible all -m ping -u ansible_admin
The success status paves the way to proceed further.
Ansible Playbooks | Automate Linux in Minutes
An Ansible playbook is an automated script that runs on the managed nodes (either all of them or selected ones). Ansible playbooks are written in YAML, and the YAML syntax needs to be followed strictly to avoid syntax errors. Let's first have a quick overview of the YAML syntax:
Prerequisites: Understanding the YAML Basics
YAML is the primary requirement for writing an Ansible playbook. Since it is a markup language, its syntax must be followed properly to have an error-free playbook and execution. The main components of YAML to focus on to get started with Ansible playbooks are:
Indentation → Defines the hierarchy and the overall structure. Only 2 spaces. Don't use Tab.
Key:Value Pairs → Define the settings/parameters/states that assist the tasks in the playbook.
Lists → In YAML, a list contains a series of actions to be performed. A list may stand on its own or assist another task.
Variables → Just like in other scripting/programming languages, variables in YAML define dynamic values in a playbook for reusability.
Dictionaries → Group related "key:value" pairs under a single key, often for module parameters.
Strings → Represent text values such as task names and messages; quotes around them are optional in most cases. Strings serve the same primary purpose as in other scripting/programming languages.
That's what helps you write Ansible playbooks.
Variable File | To be used in the Ansible Playbook
Here, we will be using a variable file, which is referenced in the playbook for variable calling/assignment. The content of the vars.yml file is as follows: There are three variables in this file, i.e., the "package" variable contains only one package, while the other two variables, "server_packages" and "other_utils", each contain a group of packages.
Step 1: Create an Ansible Playbook
Let's create a playbook file:
sudo nano /etc/ansible/testplay.yml
Here, the variable file named vars.yml is linked to this playbook. On our first run, we will use the first variable, named "package":
---
- hosts: all
  become: yes
  vars_files:
    - vars.yml
  tasks:
    - name: Install package
      apt:
        name: "{{ package }}"
        state: present
Here:
"hosts: all" states that this playbook will be applied to all the hosts listed in the hosts/inventory file.
"become: yes" elevates the permissions, which is useful when running commands that require root privileges.
"vars_files" loads the variable file(s).
"tasks" contains the tasks to be implemented in this playbook. There is only one task in this playbook: it is named "Install package", uses the "apt" module, and takes the "name" value from the variable file.
Step 2: Automate the Tasks
Before implementing this playbook, we can do a dry run of the playbook on all the servers to check for its successful execution. Here's the command to do so:
ansible-playbook /etc/ansible/testplay.yml -u ansible_admin --check
Let's run the newly created playbook with the created user:
ansible-playbook /etc/ansible/testplay.yml -u ansible_admin
Note: We can also provide the hosts/inventory file location (if it is not at the default location, i.e., /etc/ansible/) using the "-i" option and providing the path of the inventory file.
Similarly, we can use the other variable groups mentioned in the variable file as well. For instance, the following playbook now calls the "server_packages" variable and installs those server packages as per their availability:
---
- hosts: all
  become: yes
  vars_files:
    - vars.yml
  tasks:
    - name: Install package
      apt:
        name: "{{ server_packages }}"
        state: present
Here, "become: yes" is again used for root permissions, since the tasks require root privileges. The task in this playbook simply uses a different variable from the variable file. Let's dry-run the playbook on the managed nodes using the command below:
ansible-playbook /etc/ansible/testplay.yml -u ansible_admin --check
An all-green output indicates that the playbook will be implemented successfully. Remove the "--check" flag from the above command to actually implement the playbook. That's all about the main course of this article. Since Ansible comes with a rich set of its own commands, we have compiled a list of commands necessary for beginners to understand while using Ansible. Here's the list of Ansible commands that would be useful for anyone using Ansible or aiming to use it in the future:
ansible -i <inventory/hosts-file> all -m ping → Tests Ansible's connectivity with all the hosts in the inventory/hosts file.
ansible-playbook -i <inventory/hosts-file> <playbook> → Executes the <playbook> on the hosts/managed nodes.
ansible-playbook -i <inventory/hosts-file> <playbook> --check → Simulates the playbook without making changes to the target systems/managed nodes.
ansible-playbook -i <inventory/hosts-file> <playbook> --syntax-check → Checks the YAML syntax of the playbook.
ansible -i <inventory/hosts-file> <group> -m command -a "<shell-command>" → Executes a specific shell command on the managed nodes.
ansible-playbook -i <inventory/hosts-file> <playbook> -v → Executes the playbook with verbose output. Use -vv for more detail.
ansible-inventory -i <inventory/hosts-file> --list → Displays all hosts/groups in the inventory file to verify the configuration.
Note: If the inventory/hosts file is at the default location (/etc/ansible/), we can skip the "-i" flag used in the above commands. For a complete demonstration of the Ansible CLI cheat sheet, please see the Ansible CLI Cheat Sheet in the Ansible documentation.
Conclusion
To get started with Ansible, first install Ansible on one system (the control node), then install and configure SSH on the remote hosts (the managed nodes). Next, generate the SSH keys on the control node and copy the public key to the managed nodes. Once connectivity is sorted, configure the inventory file and write the playbook. That's it: Ansible will be configured and ready to run. All these steps are practically demonstrated in this guide. Just go through it and let us know if you have any questions or anything that is difficult to understand. We would be happy to assist with Ansible's installation and configuration.
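To recap, here is the whole workflow from this guide condensed into a short command sequence (a sketch; the username, managed-node IP, and playbook path are placeholders matching the examples above):
sudo zypper install ansible                      # on the control node (use dnf/apt/pacman on other distros)
sudo nano /etc/ansible/hosts                     # list the managed nodes' IPs under a group such as [main]
ssh-keygen                                       # generate an SSH key pair on the control node
ssh-copy-id ansible_admin@<managed-node-ip>      # copy the public key to each managed node
ansible all -m ping -u ansible_admin             # verify connectivity to all managed nodes
ansible-playbook /etc/ansible/testplay.yml -u ansible_admin --check   # dry run of the playbook
ansible-playbook /etc/ansible/testplay.yml -u ansible_admin           # apply the playbook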
-
FOSS Weekly #25.27: System Info, Retro Tools, Fedora 32-bit Update, Torvalds vs Bcachefs and More Linux Stuff
by: Abhishek Prakash Thu, 03 Jul 2025 05:13:51 GMT And we achieved the goal of 75 new lifetime members. Thank you for that 🙏🙏 I think I have activated it for everyone, even for members who didn't explicitly notify me after the payment. But if anyone is still left out, just send me an email. By the way, all the logged-in Plus members can download the 'Linux for DevOps' eBook from this page. I'll be adding a couple more ebooks (created and extended from existing content) for the Plus members.
💬 Let's see what else you get in this edition:
Bcachefs running into trouble.
A new Rust-based GPU driver.
Google giving the Linux Foundation a gift.
And other Linux news, tips, and, of course, memes!
📰 Linux and Open Source News
digiKam 8.7 is here with many upgrades.
Tyr is a new Rust-based driver for Arm Mali GPUs.
Claudia is an open source GUI solution for Claude AI coding.
Broadcom has been bullying enterprises with VMware audits.
Google has donated the A2A protocol to the Linux Foundation.
Murena Fairphone (Gen. 6) has been introduced with some decent specs.
Warp 2.0 is here with AI agents, better terminal tools, and more.
Cloudflare has released Orange Me2eets, an E2EE video calling solution.
Bazzite was looking at a grim future. Luckily, the proposal to retire 32-bit support on Fedora has been dropped, for now.
🧠 What We're Thinking About
A new Linux kernel drama has unfolded; this time, it's Bcachefs.
New Linux Kernel Drama: Torvalds Drops Bcachefs Support After Clash: Things have taken a bad turn for Bcachefs as Linux supremo Linus Torvalds is not happy with their objections. (It's FOSS News, Sourav Rudra)
When you are done with that, you can go through LibreOffice's technical dive into the ODF file format.
🧮 Linux Tips, Tutorials and More
There are some superb privacy-focused Notion alternatives out there.
Learn a thing or two about monitoring CPU and GPU temperatures in your Linux system.
Although commands like inxi are there, this GUI tool gives you an easy way to list the hardware configuration of your computer in Linux.
Similarly, there are plenty of CLI tools for system monitoring, but you also have GUI-based task managers.
Relive the nostalgia with these tools to get a retro vibe on Linux.
Relive the Golden Era: 5 Tools to Get Retro Feel on Linux: Get a retro vibe on Linux with these tools. (It's FOSS, Abhishek Prakash)
Desktop Linux is mostly neglected by the industry but loved by the community. For the past 13 years, It's FOSS has been helping people use Linux on their personal computers. And we are now facing an existential threat from AI models stealing our content. If you like what we do and would love to support our work, please become an It's FOSS Plus member. It costs $24 a year (less than the cost of a burger meal each month) and you get an ad-free reading experience with the satisfaction of helping the desktop Linux community. Join It's FOSS Plus
👷 Homelab and Hardware Corner
I have received the Pironman Max case for review and have assembled it too. I am looking forward to having a RAID setup on it for fun. I'll keep you posted on whether I manage it or not 😄
Pironman 5-Max: The Best Raspberry Pi 5 Case Just Got Upgraded: And the first 500 get a 25% pre-order discount offer. So hurry up with the purchase. (It's FOSS News, Sourav Rudra)
✨ Project Highlight
AnduinOS is in the spotlight lately; have you checked it out?
A New Linux Distro Has Set Out To Look Like Windows 11: I Try AnduinOS! We take a brief look at AnduinOS, which tries to mimic the Windows 11 look.
Is it worth it? (It's FOSS News, Sourav Rudra)
📽️ Videos I am Creating for You
See a better top in action in the latest video. Subscribe to It's FOSS YouTube Channel.
🧩 Quiz Time
This quiz will test your knowledge of Apt.
Apt Command Quiz: Debian or Ubuntu user? This is the apt quiz for you. Pun intended, of course :) (It's FOSS, Abhishek Prakash)
💡 Quick Handy Tip
The Dolphin file manager offers a selection mode. To activate it, press the Space bar. In this view, you can single-click on a file/folder to select it. You will notice that a quick access bar appears at the bottom when you select items, offering actions like Copy, Cut, Rename, Move to Trash, etc.
🤣 Meme of the Week
🗓️ Tech Trivia
The IBM 650, introduced on July 2, 1953, was one of the first widely used computers, featuring a magnetic drum for storage and using punch cards for programming. With a memory capacity of 20,000 decimal digits, it became a workhorse for businesses and universities throughout the 1950s.
🧑🤝🧑 FOSSverse Corner
Canonical is making some serious bank, and our FOSSers have noticed.
Ubuntu Maker Canonical Generated Nearly $300M In Revenue Last Year: "How do they do this sum, its not from the desktop free version, can only guess its server technology" (It's FOSS Community, callpaul.eu (Paul))
❤️ With love
Please share it with your Linux-using friends and encourage them to subscribe (hint: it's here). Share the articles in Linux subreddits and community forums. Follow us on Google News and stay updated in your News feed. Opt for It's FOSS Plus membership and support us 🙏 Enjoy FOSS 😄