Blog Entries posted by Blogger

  1. Chris’ Corner: Word Search

    by: Chris Coyier
    Mon, 29 Sep 2025 17:01:13 +0000

    My daughter had a little phase of being into Word Searches. I found it to be a cool dad moment when I was like “I’ll make you a tool to make them!”. That’s what she was into. She liked doing them OK, but she really liked the idea of making them. So my tool starts with a blank grid where you can type in your words, then fill in the blanks with random letters, then print it.
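If you're curious what such a generator boils down to, here's a minimal Python sketch of the idea (my own illustration, not Chris's actual tool): place each word on the grid, then fill the remaining blanks with random letters.

```python
import random
import string

def make_wordsearch(words, size=10, seed=0):
    """Place each word left-to-right on its own row, then fill blanks randomly."""
    rng = random.Random(seed)
    grid = [[None] * size for _ in range(size)]
    # Pick a distinct row for each word, then a column with room for the word.
    for row, word in zip(rng.sample(range(size), len(words)), words):
        col = rng.randrange(size - len(word) + 1)
        for i, ch in enumerate(word.upper()):
            grid[row][col + i] = ch
    # Fill every remaining blank cell with a random uppercase letter.
    return [
        "".join(ch if ch else rng.choice(string.ascii_uppercase) for ch in row)
        for row in grid
    ]

for line in make_wordsearch(["cat", "dog", "bird"], size=8):
    print(line)
```

A real tool would also support vertical, diagonal, and overlapping placements; this only shows the two core steps (place words, fill blanks) that the tool's workflow describes.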
Perhaps unsurprisingly, this type of simple game with simple, well-defined interactions is ripe for front-end developer experimentation.
    Interestingly, I’ve found most takes on HTML/CSS/JavaScript Word Searches to be more about the experience of solving them, which is just as interesting! Let’s look at some.
    Christian Collosi’s Version on <canvas>
    Canvas is nice here as lines and arcs can be drawn at really specific coordinates to “circle” the words as you interact with it, and stay circled when you find a correct word.
    Mads Stoumann’s Pure CSS Version
    You click on the letters individually, and if the ones you have clicked on match a word, it changes background to let you know you’ve got it. This all happens with <input type="checkbox">s and simpler-than-you’d-think :has(:checked + :checked ...) selectors.
    Kevin Newcombe’s Responsive word search
    The responsive-ness of Kevin’s approach here is actually really cool. It doesn’t just scale, it literally changes the columns/rows of the puzzle itself. But I’m actually even more into the SVG-drawn lines where you make guesses and the SVG-drawn circles around the successful guesses. Some sort of similar work here.
    Kit Jenson’s Word Search in Color
    Gotta love the aesthetics here! Just not a game you usually see a lot of color in, so nice to see some playing in that direction.
    Part of what makes me so damn proud of the CodePen community is that this is really the tip of the iceberg of experimentation in this very niche thing. Go around exploring for this sort of thing and you’ll find loads more.
2. Touring New CSS Features in Safari 26

    by: Juan Diego Rodríguez
    Mon, 29 Sep 2025 14:31:16 +0000

A couple of days ago, the Apple team released Safari 26.0! Is it a big deal? I mean, browsers release new versions all the time, sprinkling in a handful of new features. They are, of course, all useful, but there aren’t usually big leaps between versions. Safari 26 is different, though. It introduces a lot of new stuff. To be precise, it adds 75 new features, 3 deprecations, and 171 other improvements.
    I’d officially call that a big deal.
    The WebKit blog post does an amazing job breaking down each of the new (not only CSS) features. But again, there are so many that the new stuff coming to CSS deserves its own spotlight. So, today I want to check (and also try) what I think are the most interesting features coming to Safari.
If you are like me and don’t have macOS to test Safari, you can use Playwright instead.
    What’s new (to Safari)?
Safari 26 introduces several features you may already know from prior Chrome releases. And I can’t blame Safari for seemingly lagging behind, because Chrome is shipping new CSS at a scarily fast pace. I appreciate that browsers stagger releases so they can refine things against each other. Remember when Chrome initially shipped position-area as inset-area? We got better naming out of the two implementations.
I think you’ll find (as I did) that many of these overlapping features are part of the bigger effort towards Interop 2025, something WebKit is committed to. So, let’s look specifically at what’s new in Safari 26… at least what’s new to Safari.
    Anchor positioning
    Anchor positioning is one of my favorite features (I wrote the guide on it!), so I am so glad it’s arrived in Safari. We are now one step closer to widely available support which means we’re that much closer to using anchor positioning in our production work.
    With CSS Anchor Positioning, we can attach an absolutely-positioned element (that we may call a “target”) to another element (that we may call an “anchor”). This makes creating things like tooltips, modals, and pop-ups trivial in CSS, although it can be used for a variety of layouts.
    Using anchor positioning, we can attach any two elements, like these, together. It doesn’t even matter where they are in the markup.
<div class="anchor">anchor</div>
<div class="target">target</div>

Heads up: Even though the source order does not matter for positioning, it does for accessibility, so it’s a good idea to establish a relationship between the anchor and target using ARIA attributes for better experiences that rely on assistive tech.
We register the .anchor element using the anchor-name property, which takes a dashed ident. We then use that ident to attach the .target to the .anchor using the position-anchor property.
.anchor {
  anchor-name: --my-anchor; /* the ident */
}

.target {
  position: absolute;
  position-anchor: --my-anchor; /* attached! */
}

This positions the .target at the center of the .anchor — again, no matter the source order! If we want to position it somewhere else, the simplest way is using the position-area property.
With position-area, we can define a region around the .anchor and place the .target in it. Think of it like drawing a grid of squares mapped to the .anchor's center, top, right, bottom, and left.
    For example, if we wish to place the target at the anchor’s top-right corner, we can write…
.target {
  /* ... */
  position-area: top right;
}

This is just a taste since anchor positioning is a world unto itself. I’d encourage you to read our full guide on it.
    Scroll-driven animations
    Scroll-driven animations link CSS animations (created from @keyframes) to an element’s scroll position. So instead of running an animation for a given time, the animation will depend on where the user scrolls.
    We can link an animation to two types of scroll-driven events:
Linking the animation to a scrollable container using the scroll() function.
Linking the animation to an element’s position in the viewport using the view() function.

Both of these functions are used inside the animation-timeline property, which links the animation’s progress to the type of timeline we’re using, be it scroll or view. What’s the difference?
    With scroll(), the animation runs as the user scrolls the page. The simplest example is one of those reading bars that you might see grow as you read down the page. First, we define our everyday animation and add it to the bar element:
@keyframes grow {
  from { transform: scaleX(0); }
  to { transform: scaleX(1); }
}

.progress {
  transform-origin: left center;
  animation: grow linear;
}

Note: I am setting transform-origin to left so the animation progresses from the left instead of expanding from the center.
    Then, instead of giving the animation a duration, we can plug it into the scroll position like this:
.progress {
  /* ... */
  animation-timeline: scroll();
}

Assuming you’re using Safari 26 or the latest version of Chrome, the bar grows in width from left to right as you scroll down the viewport.
The view() function is similar, but it bases the animation on the element’s position within the viewport. That way, an animation can start or stop at specific points on the page. Here’s an example making images “pop” up as they enter view.
@keyframes popup {
  from { opacity: 0; transform: translateY(100px); }
  to { opacity: 1; transform: translateY(0); }
}

img {
  animation: popup linear;
}

Then, to make the animation progress as the element enters the viewport, we plug the animation-timeline into view().
img {
  animation: popup linear;
  animation-timeline: view();
}

If we leave it like this, though, the animation ends just as the element leaves the screen. The user doesn’t see the whole thing! What we want is for the animation to end when the element is in the middle of the viewport so the full timeline runs in view.
This is where we can reach for the animation-range property. It lets us set the animation’s start and end points relative to the viewport. In this specific example, let’s say I want the animation to start when the element enters the screen (i.e., the 0% mark) and finish a little bit before it reaches the direct center of the viewport (we’ll say 40%):
img {
  animation: popup linear;
  animation-timeline: view();
  animation-range: 0% 40%;
}

Once again, scroll-driven animations go way beyond these two basic examples. For a quick intro to all there is to them, I recommend Geoff’s notes.
I feel safer using scroll-driven animations in my production work because they’re more of a progressive enhancement that won’t break an experience even if the browser doesn’t support them. Even so, some users may prefer reduced (or no) animation at all, meaning we’d better progressively enhance it anyway with prefers-reduced-motion.
    The progress() function
    This is another feature we got in Chrome that has made its way to Safari 26. Funny enough, I missed it in Chrome when it released a few months ago, so it makes me twice as happy to see such a handy feature baked into two major browsers.
    The progress() function tells you how much a value has progressed in a range between a starting point and an ending point:
progress(<value>, <start>, <end>)

If the <value> is less than the <start>, the result is 0. If the <value> reaches the <end>, the result is 1. Anything in between returns a decimal between 0 and 1.
    Technically, this is something we can already do in a calc()-ulation:
calc((value - start) / (end - start))

But there’s a key difference! With progress(), we can compute a progress number from values of mixed data types (like comparing 100vw against pixel values), which isn’t currently possible with calc() since calc() can’t divide one length by another. For example, we can get the progress of the viewport width through a numeric range formatted in pixels:

progress(100vw, 400px, 1000px)

…and it will return 0 when the viewport is 400px and, as the screen grows to 1000px, progress to 1. This means it can typecast different units into a number, and as a consequence, we can transition properties like opacity (which takes a number or percentage) based on the viewport width (which is a length).
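To build intuition for what the browser computes, here’s a quick Python sketch of the clamped progress() math, with the units already resolved to numbers (the function is my own mirror of the CSS one, not browser code):

```python
def progress(value, start, end):
    """Mimic CSS progress(): 0 before start, 1 at/after end, linear in between."""
    raw = (value - start) / (end - start)
    return max(0.0, min(1.0, raw))  # result clamped to [0, 1]

# progress(100vw, 400px, 1000px), once every unit resolves to pixels:
print(progress(400, 400, 1000))   # viewport at 400px
print(progress(700, 400, 1000))   # halfway through the range
print(progress(1200, 400, 1000))  # past the end, clamped
```

Because the result is a plain number, it can feed number-valued properties like opacity directly, which is exactly the trick the article uses next.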
    There’s another workaround that accomplishes this using tan() and atan2() functions. I have used that approach before to create smooth viewport transitions. But progress() greatly simplifies the work, making it much more maintainable.
    Case in point: We can orchestrate multiple animations as the screen size changes. This next demo takes one of the demos I made for the article about tan() and atan2(), but swaps that out with progress(). Works like a charm!
That’s a pretty wild example. Something more practical might be reducing an image’s opacity as the screen shrinks:
img {
  opacity: clamp(0.25, progress(100vw, 400px, 1000px), 1);
}

Go ahead and resize the demo to update the image’s opacity, assuming you’re looking at it in Safari 26 or the latest version of Chrome.
I’ve clamp()-ed the progress() between 0.25 and 1. By default, progress() is supposed to clamp its result between 0 and 1 anyway. The WebKit release notes say the current implementation isn’t clamped by default, but upon testing, it does seem to be. So, if you’re wondering why I’m clamping something that’s supposedly clamped already, that’s why.
    An unclamped version may come in the future, though.
    Self-alignment in absolute positioning
    And, hey, check this out! We can align-self and justify-self content inside absolutely-positioned elements. This isn’t as big a deal as the other features we’ve looked at, but it does have a handy use case.
    For example, I sometimes want to place an absolutely-positioned element directly in the center of the viewport, but inset-related properties (i.e., top, right, bottom, left, etc.) are relative to the element’s top-left corner. That means we don’t get perfectly centered with something like this as we’d expect:
.absolutely-positioned {
  position: absolute;
  top: 50%;
  left: 50%;
}

From here, we could translate the element back by half its own width and height to get things perfectly centered. But now we have the center keyword supported by align-self and justify-self, meaning fewer moving pieces in the code:
.absolutely-positioned {
  position: absolute;
  justify-self: center;
}

Weirdly enough, I noticed that align-self: center doesn’t seem to center the element relative to the viewport, but instead relative to itself. I found out that we can use the anchor-center value to center the element relative to its default anchor, which is the viewport in this specific example:
.absolutely-positioned {
  position: absolute;
  align-self: anchor-center;
  justify-self: center;
}

And, of course, place-self is a shorthand for the align-self and justify-self properties, so we could combine those for brevity:
.absolutely-positioned {
  position: absolute;
  place-self: anchor-center center;
}

What’s new (for the web)?
    Safari 26 isn’t just about keeping up with Chrome. There’s a lot of exciting new stuff in here that we’re getting our hands on for the first time, or that is refined from other browser implementations. Let’s look at those features.
The contrast-color() function
The contrast-color() function isn’t new by any means. It’s actually been in Safari Technology Preview since 2021, where it was originally called color-contrast(). In Safari 26, we get the updated naming as well as some polish.
    Given a certain color value, contrast-color() returns either white or black, whichever produces a sharper contrast with that color. So, if we were to provide coral as the color value for a background, we can let the browser decide whether the text color is more contrasted with the background as either white or black:
h1 {
  --bg-color: coral;
  background-color: var(--bg-color);
  color: contrast-color(var(--bg-color));
}

Our own Daniel Schwarz recently explored the contrast-color() function and found it’s actually not that great at determining the best contrast between colors:
    It sucks because there are cases where neither white nor black produces enough contrast with the provided color to meet WCAG color contrast guidelines. There is an intent to extend contrast-color() so it can return additional color values, but there still would be concerns about how exactly contrast-color() arrives at the “best” color, since we would still need to take into consideration the font’s width, size, and even family. Always check the actual contrast!
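Since the advice is to always check the actual contrast, here is a small Python sketch of the WCAG 2.x contrast-ratio math you can use to sanity-check what contrast-color() picks. The formula follows the WCAG definition of relative luminance; the helper names are my own:

```python
def srgb_to_linear(c):
    # WCAG 2.x linearization of one sRGB channel, c in [0, 1]
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color):
    # hex_color like "#FF7F50"
    r, g, b = (int(hex_color[i:i + 2], 16) / 255 for i in (1, 3, 5))
    return 0.2126 * srgb_to_linear(r) + 0.7152 * srgb_to_linear(g) + 0.0722 * srgb_to_linear(b)

def contrast_ratio(fg, bg):
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# On a coral (#FF7F50) background, black text contrasts far better than white:
print(round(contrast_ratio("#000000", "#FF7F50"), 2))
print(round(contrast_ratio("#FFFFFF", "#FF7F50"), 2))
```

WCAG 2.x asks for at least 4.5:1 for normal body text, so a quick check like this tells you whether the white-or-black answer actually clears the bar for your palette.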
So, while it’s great to finally have contrast-color(), I do hope we see improvements added in the future.
    Pretty text wrapping
    Safari 26 also introduces text-wrap: pretty, which is pretty (get it?) straightforward: it makes paragraphs wrap in a prettier way.
You may remember that Chrome shipped this back in 2023. But take notice that there is a pretty (OK, that’s the last time) big difference between the implementations. Chrome only avoids typographic orphans (short last lines), while Safari does more to prettify the way text wraps:
Prevents short lines. Avoids single words at the end of the paragraph.
Improves rag. Keeps each line relatively the same length.
Reduces hyphenation. When enabled, hyphenation improves rag but also breaks words apart. In general, hyphenation should be kept to a minimum.

The WebKit blog gets into much greater detail if you’re curious about what considerations they put into it.
Safari takes additional actions to ensure “pretty” text wrapping, including improving the overall rag along the text. And this is just the beginning!
    I think these are all the CSS features coming to Safari that you should look out for, but I don’t want you to think they are the only features in the release. As I mentioned at the top, we’re talking about 75 new Web Platform features, including HDR Images, support for SVG favicons, logical property support for overflow properties, margin trimming, and much, much more. It’s worth perusing the full release notes.
    Touring New CSS Features in Safari 26 originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  3. by: Hangga Aji Sayekti
    Mon, 29 Sep 2025 13:43:24 +0530

    Linux may feel secure, but it’s not immune to malware. Servers, VPS, and even IoT devices can be targeted by malicious actors. The good news? You can detect and defend against some threats using ClamAV, an open-source antivirus engine.
Now, ClamAV is not your typical antivirus: it can throw plenty of false positives, so you need to analyze whether a detection is an actual threat.
    In this guide, I’ll show you how to simulate malware attacks safely using the EICAR test file (don’t worry, it’s completely harmless!) and how to detect them on your Linux system. By the end, you’ll know how to integrate ClamAV into your workflow for continuous protection.
Step 1: Install ClamAV
    ClamAV is an open-source antivirus engine designed for detecting malware, trojans, viruses, and other malicious threats on Linux and Unix systems.
    Linux servers often host websites, applications, or shared directories. Even though Linux is generally secure, malware can still sneak in via uploads, compromised repos, or misconfigured services.
    ClamAV helps detect malicious files before they cause damage, acting as a first line of defense in a multi-layered security strategy.
    How ClamAV works:
Signature updates: ClamAV downloads the latest malware definitions via freshclam.
Scanning: You scan files or directories with clamscan or clamdscan.
Detection: If a file matches a known signature, ClamAV reports it as infected.
Optional removal: You can remove or quarantine infected files automatically.

Installing ClamAV is straightforward. On Debian-based systems, run:
sudo apt update
sudo apt install clamav clamav-daemon -y

clamav: the main scanning engine
clamav-daemon: keeps ClamAV running in the background for faster scans

Start the daemon if you want it running continuously:
sudo systemctl start clamav-daemon

Step 2: Update Virus Signatures
    Before scanning anything, make sure your virus definitions are up-to-date:
sudo freshclam

ClamAV relies on a signature database. If it’s outdated, it might miss newer malware.
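Once signatures are fresh, every clamscan run (coming up in Step 4) prints one line per file ending in OK or FOUND. If you log your scans, a small Python sketch like this can pull the hits out of a log. The sample below mimics clamscan's typical output format, though the exact signature name depends on your database:

```python
def infected_files(scan_output):
    """Map each infected file path to its reported signature from clamscan output."""
    hits = {}
    for line in scan_output.splitlines():
        if line.endswith(" FOUND"):
            # Lines look like "/path/to/file: Signature-Name FOUND"
            path, _, rest = line.partition(": ")
            hits[path] = rest[: -len(" FOUND")]
    return hits

sample = """\
/srv/uploads/report.pdf: OK
/srv/uploads/eicar.com.txt: Eicar-Signature FOUND
----------- SCAN SUMMARY -----------
Infected files: 1"""

print(infected_files(sample))
```

You could feed this the log file written by the cron job in Step 5 to alert on new detections; the `/srv/uploads` paths and the signature name here are illustrative, not output from a real scan.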
    Step 3: Simulate a Malware Attack
    Before you test ClamAV, you need a “fake malware” file. Enter EICAR.
    Think of EICAR as a “fake virus” that is completely safe. It was created specifically to test antivirus software without putting your computer at risk.
EICAR is not a real virus; it cannot harm your files or steal your data. Antivirus programs, including ClamAV, will detect this file as if it were malware, which lets you check whether your antivirus is working correctly. It’s a good way to demonstrate malware detection in labs or tutorials.
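If you'd rather create the file than download it, the 68-byte EICAR string is published precisely so you can type or script it yourself. A Python sketch (note: any on-access scanner may flag the file the moment it's written, which is exactly the point):

```python
# The canonical 68-byte EICAR antivirus test string (harmless by design).
EICAR = (
    r"X5O!P%@AP[4\PZX54(P^)7CC)7}$"
    "EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"
)

# Write it to disk so clamscan has something to detect.
with open("eicar.com.txt", "w") as f:
    f.write(EICAR)

print(len(EICAR))  # the test string is exactly 68 bytes
```

Either way you obtain it, the resulting eicar.com.txt is what the next step scans.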
    Download the EICAR test file:
curl -O https://secure.eicar.org/eicar.com.txt

EICAR also publishes other variants of the test file (such as zipped versions) for exercising archive scanning.
    Step 4: Scan the Test File
    Scan the file manually:
clamscan eicar.com.txt

Scan the directories:
clamscan -r /<directory>

🚧 You can also choose to automatically delete files if malware is detected. I advise against it; you must be careful about auto deletion.

clamscan --remove /<directory>

Step 5: Automate Scans with Cron
    You can use cron jobs to schedule scans to run automatically. For example, to scan /var/www every day at 3 AM and log results:
0 3 * * * clamscan -r /var/www --log=/var/log/clamav/scan.log

💡 Regular scans catch new files before they can cause harm, especially in upload directories.

Conclusion
ClamAV is signature-based, meaning it works best against malware that’s already known and included in its database. It won't catch zero-day malware, fileless attacks, or obfuscated/polymorphic malware. Still, it is good to have some sort of defense, especially for scanning newly uploaded files and cloned repositories.
Of course, up your defenses with firewalls and log monitoring, sandbox uploaded files, and follow best practices.
  4. by: Abhishek Kumar
    Sat, 27 Sep 2025 11:22:07 GMT

There are plenty of cool cases you can buy for your Raspberry Pi. But here's the thing: mass-produced products often restrict creativity. And I am glad to live in a connected, creative internet where people share their creations with each other.
I am going to share some of my favorite 3D printed Raspberry Pi cases I have come across. You don't necessarily have to purchase them; most of the creators share their models and files, which gives you an opportunity to test your (and your 3D printer's) DIY skills.
    1. Industrial design Pi case
Source: musicalbigfoot via Printables

This case feels like it belongs on the bridge of a spaceship. With its sharp, geometric edges and rugged profile, it looks both futuristic and industrial without going over the top.
    It’s practical, too: built to snap together cleanly and handle a 40 mm fan without fuss.
    Features:
Ventilation-ready with fan support
Printed in five pieces, no supports needed
Removable sections for ribbon cable access
Held together with screws and heat-set inserts for extra strength
Printables

2. Desktop powerhouse with UPS
Source: dafa1 via MyMiniFactory

Is it a mini PC? Is it a Raspberry Pi? This case says: why not both.
    Designed to mimic a high-end gaming rig, it comes complete with a see-through side panel and enough room to tuck in a UPS module for portable or critical-use builds.
    Features:
Acrylic side panel for showcase-worthy builds
Space for UPS backup battery
Optional OLED display integration
PC-style heatsink support for serious cooling
MyMiniFactory

You can also purchase a tower case like this for your Pi, though.
    Pironman 5 Case With Tower Cooler and Fan
This dope Raspberry Pi 5 case has a tower cooler and dual RGB fans to keep the device cool. It also extends your Pi 5 with an M.2 SSD slot and two standard HDMI ports.
Explore Pironman 5

3. Mini tower with neon vibes
Source: JISpal01 via Thingiverse

This one’s for people who believe tiny tech deserves big style. Designed to house a real heatsink and twin fans, this tower case lights up with neon flair that looks straight out of a cyberpunk anime.
    Features:
Dual-fan duct system with efficient airflow
Designed to reduce filament waste
Easy to assemble with YouTube video support
Built to house a functional Ice Tower cooler
Thingiverse

4. The Black (Pi) hole
Source: OutpostKodelia via Thingiverse

This might be the most mysterious Pi case ever made. A black hole case for Pi-hole.
As you can see, it's not exactly sleek. It's a bit compact, definitely complex, and not for the faint of heart.
    Think of it as a black box from space: powerful, but you’ll need some build skills and patience to unlock its potential.
    Features:
Requires thermal insert installation
Detailed build guide included
Great for intermediate to advanced users
Thingiverse

5. Sci-fi case
Source: aggie6801 via Thingiverse

Part sculpture, part science experiment, this design is packed with personality.
    It looks like it teleported in from a parallel timeline where art and engineering are best friends. Best of all? It’s easy to print and assemble.
    Features:
Stylish and functional blend
Revised to fit larger heatsinks
Requires just six screws
Bold look with practical cooling
Thingiverse

6. Art Deco retro shell
Source: theprintedcow via Printables

This one brings back the glamour of early tech design. Inspired by Art Deco, it combines sweeping lines with modern geometry and works perfectly with dual-color filament to make the design pop.
    Features:
Supports Raspberry Pi 5
Snap-fit lid, no screws or supports required
Works with the official cooling fan
GPIO access preserved
Printables

7. Folding case
Source: WalterHsiao via Cults

Minimal without being boring, this folding case is perfect for people who move their Pi around a lot. It prints flat and folds into shape, like origami for hardware lovers or the old-fashioned cigarette cases we see in classic detective shows and movies.
    Features:
Prints flat, wraps around the board
No support material needed
Great for swapping SD cards or quick access
Available for multiple Pi models
Cults

8. Spaceship dock
Source: tipam via Thingiverse

This one’s pure sci-fi goodness. Shaped like a spacecraft, it brings a galactic charm to your Raspberry Pi setup. It’s relatively easy to print, despite its detailed shape.
    Features:
Compatible with Raspberry Pi 3, 4, and 5
Printed with a flat bottom for support-free setup
Requires minimal hardware to assemble
Looks fantastic with a metallic filament
Printables

9. Pac-Man & Ghost Duo
Source: tomvdb via MyMiniFactory

Nostalgia incoming! These two are straight out of the arcade era, one shaped like Pac-Man, the other like his ghostly nemesis.
    They’re fun, loud, and absolutely not trying to blend in.
    Features:
Built-in vents for passive cooling
Perfect for gaming emulator setups
Supports Raspberry Pi 3
Just add paint or stickers for the finishing touch
MyMiniFactory

10. PlayingStation 5
Source: Ubermeisters via Printables

This isn't just a Raspberry Pi case, it’s a mini console with serious flair.
    Styled after the PS5, this case is ideal for turning your Pi into a dedicated gaming station.
    Features:
Full "console" enclosure with detailed styling
Designed for Raspberry Pi 4 and 5
Includes magnet slots for satisfying case snap
Comes with STEP file for mods and upgrades
Printables

11. Pi 64
Source: elhuff via Thingiverse

Built to mimic the iconic Nintendo 64, this case hits all the right notes for retro gaming fans.
    The design even includes suggested colors and detailed assembly instructions. Just add RetroPie and prepare to time travel.
    Features:
N64-inspired design with SD card access
Designed for Raspberry Pi 3 and 4
Includes color suggestions for full nostalgia effect
Widely loved with thousands of downloads
Thingiverse

12. Mini NES
Source: serzi via Thingiverse

If you missed the NES Classic Edition craze, no worries, this 3D printed case lets you build your own.
    Designed to hold a Raspberry Pi 3, it’s perfect for an emulation setup and can be color-customized to your heart’s content.
    Features:
NES-inspired enclosure
Prints easily, no supports required
Works great with RetroPie
Personalize it with your own paint scheme
Thingiverse

You can also buy a similar case for just $11 from SunFounder.
SunFounder Retrogame Case for Raspberry Pi 5

Features: a retro design replicating traditional gaming consoles for a nostalgic gaming experience, durable ABS material that protects your Raspberry Pi 5, and easy access to all ports.
SunFounder

13. Appleberry G5
Source: MroznyHipis via Printables

This one’s a cheeky blend of Apple’s G5 design and Raspberry Pi smarts.
    Styled after the "cheese grater" Mac Pro, it’s compact and has a clever drawer-slide system for mounting the Pi inside.
    Features:
Snap-in internal drawer design
Uses just four M2 screws
Magnet slots for secure slide-in action
Looks sleek on any desk
Printables

Final Thoughts
    This is definitely not an exhaustive list. There are plenty more interesting Raspberry Pi cases you can 3D print. Look at the case below that my outie loves.
    I know what you might be thinking, these cases aren’t all about practicality, or keeping the Pi small and discreet. But that’s not the point of this article.
    This was about exploration. About expression. About finding joy in a tiny computer that can wear whatever outfit we imagine for it. And honestly, I find these projects absolutely delightful.
    What you just saw are some of the most imaginative, playful, and downright fascinating Raspberry Pi cases out there. The creativity of the community never fails to surprise and inspire me.
I’m sure I’ve missed a few fan favorites, so if you’ve designed or printed your own custom Pi cases, I’d love to see them. Share your creations with us!
    We’ll be back with more fascinating Raspberry Pi projects soon. Stay tuned.
  5. by: John Rhea
    Fri, 26 Sep 2025 14:45:13 +0000

    I always see this Google Gemini button up in the corner in Gmail. When you hover over it, it does this cool animation where the little four-pointed star spins and the outer shape morphs between a couple different shapes that are also spinning.
    I challenged myself to recreate the button using the new CSS shape() function sprinkled with animation to get things pretty close. Let me walk you through it.
    Drawing the Shapes
    Breaking it down, we need five shapes in total:
Four-pointed star
Flower-ish thing (yes, that’s the technical term)
Cylinder-ish thing (also the correct technical term)
Rounded hexagon
Circle

I drew these shapes in a graphics editing program (I like Affinity Designer, but any app that lets you draw vector shapes should work), outputted them to SVG, and then used a tool, like Temani Afif’s generator, to translate the SVG paths the program generated into the CSS shape() syntax.
    Now, before I exported the shapes from Affinity Designer, I made sure the flower, hexagon, circle, and cylinder all had the same number of anchor points. If they don’t have the same number, then the shapes will jump from one to the next and won’t do any morphing. So, let’s use a consistent number of anchor points in each shape — even the circle — and we can watch these shapes morph into each other.
    I set twelve anchor points on each shape because that was the highest amount used (the hexagon had two points near each curved corner).
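Since matching point counts by hand is easy to get wrong, here's a quick Python sketch that counts the drawing commands in a shape() value so you can verify every shape has the same number before animating. The parsing is naive token matching, and the two sample values are heavily shortened stand-ins for the real twelve-command shapes:

```python
import re

def command_count(shape_value):
    """Count drawing commands (curve/line/arc) in a CSS shape() value."""
    return len(re.findall(r"\b(?:curve|line|arc)\b", shape_value))

# Shortened illustrative samples, each with two curve commands:
hexagon = ("shape(evenodd from 6.47% 67%, "
           "curve by 0% -34% with -1% -7% / -1% -26%, "
           "curve by 7% -12% with 0.7% -4.6% / 3.3% -9.2%, close)")
flower = ("shape(evenodd from 17.9% 82%, "
          "curve by -12.3% -32% with -13.2% -5.1% / -18% -15.4%, "
          "curve by -0.02% -22.2% with -3.1% -9.3% / -3% -16.6%, close)")

# Equal command counts means the browser can interpolate between the shapes.
print(command_count(hexagon) == command_count(flower))
```

Run this over each of your generated shape() values; any shape whose count differs is the one that will snap instead of morph.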
    Something related (and possibly hard to solve, depending on your graphics program) is that some of my shapes were wildly contorted when animating between shapes. For example, many shapes became smaller and began spinning before morphing into the next shape, while others were much more seamless. I eventually figured out that the interpolation was matching each shape’s starting point and continued matching points as it followed the shape.
The result is that the matched points move between shapes, so if the starting point of one shape is on the opposite side from the starting point of the second shape, a lot of movement is necessary to get from one shape’s starting point to the next.
Luckily, the circle was the only shape that gave me trouble, so I was able to rotate it (with some trial and error) until its starting point more closely matched the other shapes’ starting points.
Another issue I ran into was that the cylinder-ish shape had two individual straight lines defined in shape() with line commands rather than the curve command. This prevented the animation from morphing into the next shape; it immediately snapped to the next shape without animating the transition (both when going into the cylinder and coming out of it).
    I went back into Affinity Designer and ever-so-slightly added curvature to the two lines, and then it morphed perfectly. I initially thought this was a shape() quirk, but the same thing happened when I attempted the animation with the path() function, suggesting it’s more an interpolation limitation than it is a shape() limitation.
    Once I finished adding my shape() values, I defined a CSS variable for each shape. This makes the later uses of each shape() more readable, not to mention easier to maintain. With twelve lines per shape the code is stinkin’ long (technical term) so we’ve put it behind an accordion menu.
View Shape Code

:root {
  --hexagon: shape( evenodd from 6.47% 67.001%, curve by 0% -34.002% with -1.1735% -7.7% / -1.1735% -26.302%, curve by 7.0415% -12.1965% with 0.7075% -4.641% / 3.3765% -9.2635%, curve by 29.447% -17.001% with 6.0815% -4.8665% / 22.192% -14.1675%, curve by 14.083% 0% with 4.3725% -1.708% / 9.7105% -1.708%, curve by 29.447% 17.001% with 7.255% 2.8335% / 23.3655% 12.1345%, curve by 7.0415% 12.1965% with 3.665% 2.933% / 6.334% 7.5555%, curve by 0% 34.002% with 1.1735% 7.7% / 1.1735% 26.302%, curve by -7.0415% 12.1965% with -0.7075% 4.641% / -3.3765% 9.2635%, curve by -29.447% 17.001% with -6.0815% 4.8665% / -22.192% 14.1675%, curve by -14.083% 0% with -4.3725% 1.708% / -9.7105% 1.708%, curve by -29.447% -17.001% with -7.255% -2.8335% / -23.3655% -12.1345%, curve by -7.0415% -12.1965% with -3.665% -2.933% / -6.334% -7.5555%, close );
  --flower: shape( evenodd from 17.9665% 82.0335%, curve by -12.349% -32.0335% with -13.239% -5.129% / -18.021% -15.402%, curve by -0.0275% -22.203% with -3.1825% -9.331% / -3.074% -16.6605%, curve by 12.3765% -9.8305% with 2.3835% -4.3365% / 6.565% -7.579%, curve by 32.0335% -12.349% with 5.129% -13.239% / 15.402% -18.021%, curve by 20.4535% -0.8665% with 8.3805% -2.858% / 15.1465% -3.062%, curve by 11.58% 13.2155% with 5.225% 2.161% / 9.0355% 6.6475%, curve by 12.349% 32.0335% with 13.239% 5.129% / 18.021% 15.402%, curve by 0.5715% 21.1275% with 2.9805% 8.7395% / 3.0745% 15.723%, curve by -12.9205% 10.906% with -2.26% 4.88% / -6.638% 8.472%, curve by -32.0335% 12.349% with -5.129% 13.239% / -15.402% 18.021%, curve by -21.1215% 0.5745% with -8.736% 2.9795% / -15.718% 3.0745%, curve by -10.912% -12.9235% with -4.883% -2.2595% / -8.477% -6.6385%, close );
  --cylinder: shape( evenodd from 10.5845% 59.7305%, curve by 0% -19.461% with -0.113% -1.7525% / -0.11% -18.14%, curve by 10.098% -26.213% with 0.837% -10.0375% / 3.821% -19.2625%, curve by 29.3175% -13.0215% with 7.2175% -7.992% / 17.682% -13.0215%, curve by 19.5845% 5.185% with 7.1265% 0% / 13.8135% 1.887%, curve by 9.8595% 7.9775% with 3.7065% 2.1185% / 7.035% 4.8195%, curve by 9.9715% 26.072% with 6.2015% 6.933% / 9.4345% 16.082%, curve by 0% 19.461% with 0.074% 1.384% / 0.0745% 17.7715%, curve by -13.0065% 29.1155% with -0.511% 11.5345% / -5.021% 21.933%, curve by -26.409% 10.119% with -6.991% 6.288% / -16.254% 10.119%, curve by -20.945% -5.9995% with -7.6935% 0% / -14.8755% -2.199%, curve by -8.713% -7.404% with -3.255% -2.0385% / -6.1905% -4.537%, curve by -9.7575% -25.831% with -6.074% -6.9035% / -9.1205% -15.963%, close );
  --star: shape( evenodd from 50% 24.787%, curve by 7.143% 18.016% with 0% 0% / 2.9725% 13.814%, curve by 17.882% 7.197% with 4.171% 4.2025% / 17.882% 7.197%, curve by -17.882% 8.6765% with 0% 0% / -13.711% 4.474%, curve by -7.143% 16.5365% with -4.1705% 4.202% / -7.143% 16.5365%, curve by -8.6115% -16.5365% with 0% 0% / -4.441% -12.3345%, curve by -16.4135% -8.6765% with -4.171% -4.2025% / -16.4135% -8.6765%, curve by 16.4135% -7.197% with 0% 0% / 12.2425% -2.9945%, curve by 8.6115% -18.016% with 4.1705% -4.202% / 8.6115% -18.016%, close );
  --circle: shape( evenodd from 13.482% 79.505%, curve by -7.1945% -12.47% with -1.4985% -1.8575% / -6.328% -10.225%, curve by 0.0985% -33.8965% with -4.1645% -10.7945% / -4.1685% -23.0235%, curve by 6.9955% -12.101% with 1.72% -4.3825% / 4.0845% -8.458%, curve by 30.125% -17.119% with 7.339% -9.1825% / 18.4775% -15.5135%, curve by 13.4165% 0.095% with 4.432% -0.6105% / 8.9505% -0.5855%, curve by 29.364% 16.9% with 11.6215% 1.77% / 22.102% 7.9015%, curve by 7.176% 12.4145% with 3.002% 3.7195% / 5.453% 7.968%, curve by -0.0475% 33.8925% with 4.168% 10.756% / 4.2305% 22.942%, curve by -7.1135% 12.2825% with -1.74% 4.4535% / -4.1455% 8.592%, curve by -29.404% 16.9075% with -7.202% 8.954% / -18.019% 15.137%, curve by -14.19% -0.018% with -4.6635% 0.7255% / -9.4575% 0.7205%, curve by -29.226% -16.8875% with -11.573% -1.8065% / -21.9955% -7.9235%, close );
}

If all that looks like gobbledygook to you, it largely does to me too (and I wrote the shape() Almanac entry). As I said above, I converted them from stuff I drew to shape()s with a tool. If you can recognize the shapes from the custom property names, then you'll have all you need to know to keep following along.
    Breaking Down the Animation
    After staring at the Gmail animation for longer than I would like to admit, I was able to recognize six distinct phases:
First, on hover:

1. The four-pointed star spins to the right and changes color.
2. The fancy blue shape spreads out from underneath the star shape.
3. The fancy blue shape morphs into another shape while spinning.
4. The purplish color is wiped across the fancy blue shape.

Then, after hover:

5. The fancy blue shape contracts (basically the reverse of Phase 2).
6. The four-pointed star spins left and returns to its initial color (basically the reverse of Phase 1).

That's the run sheet we're working with! We'll write the CSS for all that in a bit, but first I'd like to set up the HTML structure that we're hooking into.
    The HTML
    I’ve always wanted to be one of those front-enders who make jaw-dropping art out of CSS, like illustrating the Sistine chapel ceiling with a single div (cue someone commenting with a CodePen doing just that). But, alas, I decided I needed two divs to accomplish this challenge, and I thank you for looking past my shame. To those of you who turned up your nose and stopped reading after that admission: I can safely call you a Flooplegerp and you’ll never know it.
    (To those of you still with me, I don’t actually know what a Flooplegerp is. But I’m sure it’s bad.)
    Because the animation needs to spread out the blue shape from underneath the star shape, they need to be two separate shapes. And we can’t shrink or clip the main element to do this because that would obscure the star.
    So, yeah, that’s why I’m reaching for a second div: to handle the fancy shape and how it needs to move and interact with the star shape.
<div id="geminianimation">
  <div></div>
</div>

The Basic CSS Styling
Each shape is essentially defined on the same box, with the same dimensions and margin spacing.
#geminianimation {
  width: 200px;
  aspect-ratio: 1/1;
  margin: 50px auto;
  position: relative;
}

We can clip the box to a particular shape using a pseudo-element. For example, let's clip a star shape using the CSS variable (--star) we defined for it and set a background color on it:
#geminianimation {
  width: 200px;
  aspect-ratio: 1;
  margin: 50px auto;
  position: relative;

  &::before {
    content: "";
    clip-path: var(--star);
    width: 100%;
    height: 100%;
    position: absolute;
    background-color: #494949;
  }
}

We can hook into the container's child div and use it to establish the animation's starting shape, which is the flower (clipped with our --flower variable):
#geminianimation div {
  width: 100%;
  height: 100%;
  clip-path: var(--flower);
  background: linear-gradient(135deg, #217bfe, #078efb, #ac87eb, #217bfe);
}

What we get is a star shape stacked right on top of a flower shape. We're almost done with our initial CSS, but in order to recreate the animated color wipes, we need a much larger surface that allows us to "change" colors by moving the background gradient's position. Let's move the gradient so that it is declared on a pseudo-element instead of the child div, and size it up by 400% to give us additional breathing room.
#geminianimation div {
  width: 100%;
  height: 100%;
  clip-path: var(--flower);

  &::after {
    content: "";
    background: linear-gradient(135deg, #217bfe, #078efb, #ac87eb, #217bfe);
    width: 400%;
    height: 400%;
    position: absolute;
  }
}

Now we can clearly see how the shapes are positioned relative to each other:
Animating Phases 1 and 6
    Now, I’ll admit, in my own hubris, I’ve turned up my very own schnoz at the humble transition property because my thinking is typically, Transitions are great for getting started in animation and for quick things, but real animations are done with CSS keyframes. (Perhaps I, too, am a Flooplegerp.)
    But now I see the error of my ways. I can write a set of keyframes that rotate the star 180 degrees, turn its color white(ish), and have it stay that way for as long as the element is hovered. What I can’t do is animate the star back to what it was when the element is un-hovered.
    I can, however, do that with the transition property. To do this, we add transition: 1s ease-in-out; on the ::before, adding the new background color and rotating things on :hover over the #geminianimation container. This accounts for the first and sixth phases of the animation we outlined earlier.
#geminianimation {
  &::before {
    /* Existing styles */
    transition: 1s ease-in-out;
  }

  &:hover {
    &::before {
      transform: rotate(180deg);
      background-color: #FAFBFE;
    }
  }
}

Animating Phases 2 and 5
    We can do something similar for the second and fifth phases of the animation since they are mirror reflections of each other. Remember, in these phases, we’re spreading and contracting the fancy blue shape.
    We can start by shrinking the inner div’s scale to zero initially, then expand it back to its original size (scale: 1) on :hover (again using transitions):
#geminianimation {
  div {
    scale: 0;
    transition: 1s ease-in-out;
  }

  &:hover {
    div {
      scale: 1;
    }
  }
}

Animating Phase 3
    Now, we very well could tackle this with a transition like we did the last two sets, but we probably should not do it… that is, unless you want to weep bitter tears and curse the day you first heard of CSS… not that I know from personal experience or anything… ha ha… ha.
    CSS keyframes are a better fit here because there are multiple states to animate between that would require defining and orchestrating several different transitions. Keyframes are more adept at tackling multi-step animations.
    What we’re basically doing is animating between different shapes that we’ve already defined as CSS variables that clip the shapes. The browser will handle interpolating between the shapes, so all we need is to tell CSS which shape we want clipped at each phase (or “section”) of this set of keyframes:
@keyframes shapeshift {
  0% { clip-path: var(--circle); }
  25% { clip-path: var(--flower); }
  50% { clip-path: var(--cylinder); }
  75% { clip-path: var(--hexagon); }
  100% { clip-path: var(--circle); }
}

Yes, we could combine the first and last keyframes (0% and 100%) into a single step, but we'll need them separated in a second because we also want to animate the rotation at the same time. We'll set the initial rotation to 0turn and the final rotation to 1turn so that it can keep spinning smoothly as long as the animation is continuing:
@keyframes shapeshift {
  0% { clip-path: var(--circle); rotate: 0turn; }
  25% { clip-path: var(--flower); }
  50% { clip-path: var(--cylinder); }
  75% { clip-path: var(--hexagon); }
  100% { clip-path: var(--circle); rotate: 1turn; }
}

Note: Yes, turn is indeed a CSS unit, albeit one that often goes overlooked.
We want the animation to be smooth as it interpolates between shapes, so I'm setting the animation's timing function to ease-in-out. Unfortunately, this also slows down the rotation as it starts and ends. However, because we both begin and end with the circle shape, the fact that the rotation slows coming out of 0% and slows again heading into 100% is not noticeable: a circle looks like a circle no matter its rotation. If we were ending with a different shape, the easing would be visible, and I would use two separate sets of keyframes (one for the shape-shift and one for the rotation) and call them both on the #geminianimation child div.
#geminianimation:hover {
  div {
    animation: shapeshift 5s ease-in-out infinite forwards;
  }
}

Animating Phase 4
That said, we still need one more set of keyframes, specifically for changing the shape's color. Remember how we set a linear gradient on the child div's ::after pseudo-element, then increased the pseudo's width and height? Here's that bit of code again:
#geminianimation div {
  width: 100%;
  height: 100%;
  clip-path: var(--flower);

  &::after {
    content: "";
    background: linear-gradient(135deg, #217bfe, #078efb, #ac87eb, #217bfe);
    width: 400%;
    height: 400%;
    position: absolute;
  }
}

The gradient is that large because we're only showing part of it at a time. And that means we can translate the gradient's position to move the gradient at certain keyframes. 400% can be nicely divided into quarters, so we can move the gradient by, say, three-quarters of its size. Since its parent, the #geminianimation div, is already spinning, we don't need any fancy movements to make it feel like the color is coming from different directions. We just translate it linearly, and the spin adds some variability to what direction the color wipe comes from.
@keyframes gradientMove {
  0% { translate: 0 0; }
  100% { translate: -75% -75%; }
}

One final refinement
Instead of using the flower as the default shape, let's change it to the circle. This smooths things out when the hover ends and the animation stops and returns to its initial position.
    And there you have it:
Wrapping up
    We did it! Is this exactly how Google accomplished the same thing? Probably not. In all honesty, I never inspected the animation code because I wanted to approach it from a clean slate and figure out how I would do it purely in CSS.
    That’s the fun thing about a challenge like this: there are different ways to accomplish the same thing (or something similar), and your way of doing it is likely to be different than mine. It’s fun to see a variety of approaches.
    Which leads me to ask: How would you have approached the Gemini button animation? What considerations would you take into account that maybe I haven’t?
    Recreating Gmail’s Google Gemini Animation originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  6. by: Abhishek Prakash
    Fri, 26 Sep 2025 13:14:55 +0530

    As I promised in the previous newsletter, we have published the Git for DevOps course. Unlike most of our courses, this one has more theory than hands-on. We did that deliberately, as many Git courses out there only explain commands, not the underlying concepts. And that creates a knowledge gap.
Git for DevOps: Gain the right knowledge. Learn the core principles of Git instead of jumping straight into git commands. (Linux Handbook, by Mead Naji)

This course is for beginners and covers the absolute essential concepts of Git. Later, we plan to create another course that focuses on more advanced topics, with an emphasis on real-world use cases and best practices.
Next week we should see some chapters of the 'Automation with Systemd' course.
     
     
This post is for subscribers only.
  7. Chapter 8: Branching

    by: Mead Naji
    Thu, 25 Sep 2025 16:01:24 +0530

This lesson is for paying subscribers only.
  8. Chapter 2: Git Architecture

    by: Sreenath V
    Thu, 25 Sep 2025 14:49:24 +0530

    When you build any project, the first question you need to answer is what you need as requirements, or what you want to build to solve the problem that you have.
Suppose I want to rebuild Git from scratch. I will not just go to a developer, say "build Git," and expect them to start coding it.

Of course not. You will first sit with the developer and explain what problem you want to solve: which manual, time-consuming tasks you want to automate, which features must be in the system, and which optional features would be nice to have.
    In our case, let us start explaining what we need Git to do.
    Suppose I am working on a project inside a folder or a directory just like this:
Sample working directory

We will start using some Git terms. Instead of "folder" or "directory," we use the term working directory, which means the same thing. In Git, we often call it the working tree, which also refers to the folder that contains the project files.
    After some time, I added some changes to my project. So it is not the same version as before. It will be, for example, version 1.1:
A diagram showing the working directory with changes, called version 1.1 of the project.

Nothing happened. We just modified and added some code to the project. We still see the same thing.
    We do not see Git working now. But under the hood, Git will do this:
A diagram showing what happens under the hood in Git.

Under the hood, Git will create another directory. We call it the Git repository, or Git repo for short, in which it saves the state of the project. It will contain version 1 of the project and version 1.1, who made the changes, when they made them, and what was changed. All of this is saved in that repo.
    But in our project folder, we only see the same files. Nothing changed, because all the work is saved and handled in the Git repo.
    So now, as a requirement, you need Git to track the files. By "track," we mean track the changes, what is added or removed in all the files and folders inside our project. That includes file names and the code inside, everything.
    We also need this to be portable and to work on any operating system. If I work on Linux and switch to Windows, for example, it should still work. So it needs to be system independent.
    I also need everything Git tracks to have a unique ID. We said that Git tracks files and folders, so everything should have an ID.
📋 We call files, folders, and things like that in Git "objects".

I also need to track the history. By "history" we mean all the versions that we have, what changed from version to version, and who made the changes. And if a developer makes changes and later undoes them and goes back to a previous version, I need to know all of that too.
💡 We call this the "log" in Git.

The last thing is that we need to track files without changing anything in them. Track the file as it is; do not do any modification to it.
    Now that we have these base requirements, the developer can take them and start working to build Git.
    Now the question is how can we start applying and doing this tracking? The first thing we should do is change the format of tracking files.
    Let us take an example like this:
A diagram showing the working directory and the Git repository.

Now, if we take this example where we track files by their names, OK, but what will happen if I delete the file b.txt and make another file with the same name b.txt but with different content?
    In this situation, is it a new file or is it the same file as before? What exactly happened? And do we also need to track the file names?
We cannot track files by their names alone; the names themselves also need to be tracked. Tracking by name only is not logical, right?
    For this, files and folders should be transformed into objects inside the Git repo.
    Objects in Git are not only files and folders. There are some other things too:
Git objects

We said that objects track files and folders. So where are they?
Actually, they are still there; only their names have changed. Magic names, right? 🥸
    blob
    This fancy name actually means a file. If it means a file, why not just write "file" instead of this crazy name?
    The answer is that we do not just track the content of the file. We also track the metadata.
    💡What does metadata mean?
Metadata means the type of the file, the size, the date, and so on. Metadata and content together give us a blob 🤗

tree
    Basically, this means a folder 😡
    Again, why not just use the simple and understandable name "folder" instead of "tree"? 🤔
    Again, because it contains the metadata of the folder, such as size, types of files and folders inside it, and so on. All of this makes a tree.
Git objects: files and contents.

The other types (commits and tags) will be explained later, and we will understand what they do as well.
We call files and folders objects because they contain not only the file and folder data but also metadata about them; they are not just regular files and folders. I hope you understand the difference.
    Now, we’ve solved the problem of tracking, right?
💡 We no longer track files and folders in the traditional way. Instead, we treat them as objects that contain both content and metadata.

OS Independence
    Let’s move on to the second requirement: OS independence.
    How can we achieve this?
    Git has something we call a folder structure. This structure contains all the information about the repository, and the best part is it works on any operating system. It's just a folder.
Most of the content in this folder consists of simple files. The format is straightforward, and the files are compressed to make them faster to read.
    You get the point: a Git repository is just a simple folder. It doesn’t contain any databases or complex systems, just plain files.
    Portability
    Another useful feature of this structure is its portability.
    If you take this folder to a Windows system, for example, you’ll still see all the modifications that were made on Linux. This makes Git highly portable.
Now, you might be wondering how the Git repository travels along with your project. Great question!
    Remember when we said that we have the working directory and the Git repository? What Git actually does is create a special folder for itself inside the project directory.
Working directory

🤷 So, whenever you copy your project folder, you're also copying the Git repository with it!
    This hidden folder is called .git. The dot at the beginning means it’s hidden by default.
    Typically, the folder that contains both your project files and the Git metadata is called a repository, or repo for short.
    At this point, we’ve understood how Git achieves portability and OS independence.
Unique ID
The third requirement is the unique ID, along with history tracking. Now that we use blobs and trees, we no longer use names as identifiers, so we need another way to identify which blob belongs to which file, which tree belongs to which folder, and so on.
    For this, there are several ways to implement it. We can use, for example, a hash function that generates a hash for everything, a file, a folder, or even a single character. This hash function can generate a unique hash code for each, and it will be unique depending on the file. It should also be deterministic. This means if I pass the word "git" to this function, it should generate a unique hash. And if I pass the same word "git" again later, it should generate the same hash, not a different one. This is very important.
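Here's a quick illustration of those two properties using Python's standard hashlib module (a plain demonstration of deterministic hashing, not Git's own code):

```python
import hashlib

def sha1_hex(data: bytes) -> str:
    # SHA-1 always produces a 160-bit digest: 40 hexadecimal characters
    return hashlib.sha1(data).hexdigest()

print(sha1_hex(b"git"))  # deterministic: the same input...
print(sha1_hex(b"git"))  # ...always yields the same hash
print(sha1_hex(b"Git"))  # one changed character yields a completely different hash
```

Determinism is what lets Git compare hashes later to decide whether anything changed.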
    The question is: does this already exist, or should we build one from scratch for Git?
    Yes, there are a bunch of hashing algorithms that do the same job, like SHA (Secure Hash Algorithm), which has different types such as SHA-1 and SHA-2. The difference lies in how many characters are generated for the hash. There is also MD5, and others.
    Git uses SHA-1 to generate unique IDs for objects, and you can see this with a real example in your terminal. If you try to get the hash ID for the word "git", it will look like this:
Hash of a word

In this example, we echo the word "Git" and pipe it (pass it) to git hash-object, which is responsible for generating unique IDs for objects in Git. It gives us a 40-character ID as a result.
--stdin tells Git: "Don't read from a file; instead, read the data coming through standard input (from echo)."
    Now let’s go a bit deeper. We said that Git uses SHA-1, and there is also a tool in Linux called shasum that uses the same algorithm to hash files. Let’s try it with the same word "Git":
Getting the shasum

Hmm… wait. You said that git hash-object generates a deterministic hash and that shasum uses SHA-1 just like Git. So why is the output for the same word different? 🤔
Great question and observation. This is true, but the answer is that Git does not just take the content of the file. It also takes metadata and some other things into consideration. If you remember, we said that earlier. Yes, but this is just a single word, not something inside a file, right?
    Actually, it is inside a file. And Git takes this into account along with three other things:
- The type of the file
- The size of the file
- A null character at the end of the header

So if you give Git the word "Git", it will calculate all of this and include it in the hash ID. It does not just take the word as it is. That is why the result is not the same.
    Now, with this in mind, if we take the same word "Git", Git sees it like this:
echo "blob 4\0Git"

- blob is the type
- 4 is the size (you see only three characters, but there is an escape character for the newline at the end)
- then comes the escape character \0, followed by the word itself

All of this is combined together, and hash-object generates an ID for it.
🚧 You should pay attention: every character, space, capital letter, or symbol makes a difference. The algorithm works on the binary format and ASCII codes, so each character has a different code.

If you echo the word using echo, it will give us this result:
    Now, if we take the same final string that Git sees and pass it to shasum like this:
The same final string passed to shasum

It will give the same output as the one generated using git hash-object.
The shasum output is the same

This is just a simple demonstration of how Git works and how it takes into account the metadata we already explained. We said that Git doesn't just track the content; it also tracks the metadata.
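You can reproduce the same arithmetic without a terminal at all, using Python's standard hashlib module. This is a sketch of the idea described above, not Git's actual source: build the string Git hashes (the type, a space, the content length, a null byte, then the content) and run SHA-1 over it.

```python
import hashlib

content = b"Git\n"  # echo "Git" emits the word plus a trailing newline

# What plain `shasum` hashes: the raw content only
raw_id = hashlib.sha1(content).hexdigest()

# What Git hashes for a blob: the header "blob <size>\0" plus the content
store = b"blob " + str(len(content)).encode() + b"\x00" + content
blob_id = hashlib.sha1(store).hexdigest()

print(raw_id)   # differs from Git's object ID
print(blob_id)  # the blob-style hash, header included
```

The header is exactly why shasum and git hash-object disagree on the same word: they are hashing different byte strings.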
    Up to this point, we explained how Git tracks everything, how it is OS-independent, and how it generates a unique ID. The next step is to understand how Git tracks history.
    If you remember, we said that the unique ID (the generated hash) changes if we make any change to the file, whether it is the name or the content. Even if you just add a space to the file, the hash will change.
    What Git does is take the newly generated hash, compare it to the saved one from the last change, and check if they are the same. If they are the same, it means you did not make any changes to the file. If they are different, it means the file changed. Simple, right?
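That comparison is cheap to sketch, again with hashlib standing in for Git's hashing:

```python
import hashlib

def object_id(content: bytes) -> str:
    # Stand-in for Git hashing a file's content
    return hashlib.sha1(content).hexdigest()

saved_id = object_id(b"hello world")     # hash stored after the last change
current_id = object_id(b"hello world")   # hash of the file right now
print(saved_id == current_id)            # True: no change detected

current_id = object_id(b"hello world ")  # just one added space
print(saved_id == current_id)            # False: the file is considered changed
```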
    Let's continue our journey exploring the Git architecture. Old VCS systems (some of them still exist and work) used something we call a two-tree architecture. What does this mean? It is just like we already explained. We have two folders: one for the project and the other one for the Git repository.
Working directory and Git repo

Wait, doesn't Git use the same architecture? No, actually Git uses a three-tree architecture, which means three folders (trees), like this:
Git's three-tree architecture

In between the working directory and the Git repository, there is another tree called the staging area, or the index.
    I will tell you something cool. This staging area is actually a file inside the Git repository. If it is just a file, why do we call it a tree?
    It looks like a tree, but physically on the machine, it is just a file. And we are going to learn what it does in a moment.
    We place this tree between the repository and the working directory because it performs a task between the two. Suppose you are working on a file and you make changes to it. Instead of putting it directly in the Git repository, we put it in the staging area. In simple words, we prepare the file to send it to the Git repository.
The Git staging area

So basically, the staging area is where we place the file to get it ready to be saved in the Git repository. We call this operation of sending the changes to the Git repository a commit.
Staging area and commit

A commit is the operation of saving objects in the Git repository.
    Okay, so if that is the case, why do we need this staging area? Why not send the changes directly to the Git repository? It sounds useless, right?
    Well, it is not just that. There are other features of this staging area. Otherwise, Linus Torvalds would not have added it.
    Suppose you make some changes, then you revise them and decide to change something. Before sending it to the repository and having another version, you can make that change in the staging area, compare it to the last version, and then when you are confident, send it to the repository. This helps avoid having plenty of versions. You get only one good version.
    Another useful thing the staging area provides is the ability to group multiple changes into one commit. Let’s take a concrete example. Suppose you are working on a web project and you receive an order to change the name of the website. The name appears in different places in the website. What you will do in this situation is change the name in every place, and when you complete the modification of all your files, you send them to the Git repository in one commit.
Multiple staged changes, one commit.

So here we have multiple changes in the staging area for the same thing, the name, but in the end it is a single commit: one version in the Git repository instead of multiple versions.
    You get the benefit now. Instead of sending the changes directly to the Git repository and getting a new version each time, we make all the changes we need in the staging area, then send them all at once.
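Here is a toy Python model of that flow. The names (ToyRepo, add, commit) are invented for illustration and hold far less than real Git does, but they show how many staged changes become a single commit:

```python
import hashlib

class ToyRepo:
    def __init__(self):
        self.index = {}    # staging area: path -> object ID
        self.commits = []  # the "Git repository": a list of snapshots

    def add(self, path: str, content: bytes):
        # Stage a change: hash the content and record its ID in the index
        self.index[path] = hashlib.sha1(content).hexdigest()

    def commit(self):
        # Save everything currently staged as one snapshot
        self.commits.append(dict(self.index))

repo = ToyRepo()
repo.add("index.html", b"<title>New Name</title>")
repo.add("about.html", b"<h1>New Name</h1>")
repo.commit()  # several staged changes, exactly one new version
print(len(repo.commits))  # 1
```

Two add() calls, one commit(): the repository gains a single new version containing both changes.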
    This area is also useful in troubleshooting. You can compare versions and see the difference before the changes are in the Git repository.
    Another thing you should know is that the working tree is not always just a folder on your machine where you can do whatever you want. Sometimes, it can be the server or an online project for the client. So it is not always an open area to play. You need a place to verify and think before making changes online.
    If you do not like using the staging area in Git, you can combine both steps, staging and committing, into one command.
    Okay, that is understandable. But in the beginning you said the word "index" for the staging area. What is that?
    Yes, the staging area is also called the index. Let’s take an example to understand this point and another important one.
    Let’s say you are just starting a project and you create one file like this:
A new project file

In this case, you have the Git repository, but it is empty. It does not contain anything at the moment. Now that you have a file in your project, Git cannot track it yet, so this file is untracked.
An untracked file in a new project

The first thing we need to do is make the file tracked so that Git can see and track it. Git is sad because it is not tracking the file.
    The area that turns the file from untracked to tracked is the staging area. What happens first is that the file gets hashed and assigned a unique ID.
    The command that makes the file tracked in Git is git add. When the file is tracked, its unique ID is registered in the index, which is the staging area file. I told you that the staging area is just a file.
Using the git add command adds the file to the staging area with an ID.

Simple, right? The file gets hashed in the staging area and all is set. What happens next?
Git creates a blob (a file) with the same SHA-1 created in the staging area. So now Git can track it and create version one of the file, right?

The answer is no, not yet. The version is created only after you do the next step, which is git commit. Only then do we have version one.
Commit after staging in Git

So after git commit, Git creates the first snapshot (let's say, a copy) of the project. Now you get the idea: git add sends the file to the staging area (staging files), and git commit creates the first snapshot of the project.
    If you are just starting the project and you create a new file, the git add will change the file from untracked to tracked.
    If the file you are working on already exists and has many versions already in the Git repo, it will be modified, and this brings us to another concept in Git called file state.
    Git, inside the Git repo, classifies files into two categories: either the file is tracked or untracked.
    Untracked files, as we explained, are files that have not yet been added to the staging area. Git sees them in the working tree but does not track them until we add them with git add.
    The other type, tracked files, can be modified. This means the last commit of that file in the Git repo is not the same as the one you are working on in the working tree. If you change, add, or remove anything, Git categorizes that file as modified.
📋 Read this again to understand what we mean by modified (the file in the working tree differs from the one in the Git repo).
The other type of tracked file in Git is unmodified. This means the version in the Git repo is the same as the one in the working tree, with no modification.
Git repo, type of files
These are the states of the files inside the Git repo. We use U as a short form of the word Untracked and M to refer to files that are tracked and modified.
    You are going to see this in action when we start working on the command line and apply what we learn here.
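As a quick preview, git status --short prints exactly these markers: ?? for untracked files and M for tracked, modified ones. The repository and file names below are made up for the example:

```shell
# Set up a repo with one committed file (names are just examples)
git init demo && cd demo
git config user.email "you@example.com"
git config user.name "Your Name"
echo "v1" > notes.txt
git add notes.txt && git commit -m "v1"

echo "v2" >> notes.txt   # tracked file, now modified in the working tree
echo "todo" > todo.txt   # brand-new file, untracked

git status --short
#  M notes.txt   <- tracked and modified (M)
# ?? todo.txt    <- untracked (??)
```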
    This is basically the Git architecture in a short and simple way. I hope you understand how Git works.
    What we explained here is the fundamental architecture. There are some extra things that we will learn later, but this is necessary to understand in order to get the full point of how Git works behind the scenes.
  9. Git for DevOps

    by: Mead Naji
    Thu, 25 Sep 2025 14:33:55 +0530

    As a system admin, a DevOps engineer, or even a developer, working with a version control system, or VCS for short, is not a choice. You need to master one of the available VCS tools in order to collaborate and work on a team project.
    A VCS is a way to record changes to single or multiple files over time and lets you control and retrieve a specific version later when you need it.
    Suppose you are working on an application, and every week you have to make new changes and features. With a version control system, you always have a working version of your project, and every time something crashes or does not work, you have a tracking system that records all changes made to the application. You can compare changes over time, go back to a specific version of a single file or the entire application, and so on.
    Right now, there are a lot of VCS tools and repository managers available, and the most popular combination is Git together with GitHub (a repository manager).
    What is Git
    Git is one of the most popular version control systems, created in 2005 by the creator of the Linux kernel, Linus Torvalds. Back then, he created it as a project for the Linux kernel, but it ended up being the most popular system out there.
    Git is a distributed (you will know what this means later) version control system. It is free and open source, designed for small to large projects with high speed and efficiency.
    Compared to other version control systems, Git is different in how it works and deals with data. Later in the series, you will get a deep understanding of how Git works and how it differs from other VCS tools.
    Why Learn Git
Git vs other VCS
Whatever position you want, whether developer or sysadmin, Git is one of the most important tools that you definitely need to master, and you will need it in any work you do.
    If you want to become a successful DevOps engineer or sysadmin, then you need to master Git better than a developer does, as you may need to perform extra tasks in your daily job with Git and GitHub. So mastering Git is obligatory, not optional.
    Git as a system is simple and efficient. You just need the right explanation and a deep look into the architecture to understand what each component does exactly.
    What You Should Learn From This Series
    This series will make you an expert in using Git, as it gives a deep understanding starting from how Git is built and all its components.
    The problem that everyone faces when learning Git is jumping straight into applying commands. The commands are simple to run, but that way you don’t gain real knowledge of how it all works.
    Git has around 145 commands, but you'll end up using the most essential ones, which can be grouped into about 20 core commands. The real knowledge lies in understanding how Git works behind the scenes to gain a full understanding of the Git system.
    A lot of the available resources out there only explain commands, not the underlying concepts. This series will teach you everything about Git, starting from the initialization of a project folder to the advanced layers of Git.
  10. by: Abhishek Prakash
    Thu, 25 Sep 2025 04:40:13 GMT

    There were two smartphone launches recently, both with hardware kill switches. One is the Murena-powered HIROH Phone, and the other is the Furi Labs FLX1s. The FLX1s runs a Debian-based operating system.
    Now, these are not necessarily for everyone, and they sure are not cheap. I mean, they might not be as expensive as iPhones or Samsung Galaxy S series, but they are surely in the mid-range.
    These are more suited for journalists and activists who have to protect sensitive data, hence the kill switch. That doesn't mean a privacy-aware regular Joe (or Jane) cannot opt for them. It's just that the lack of some mainstream features could cause frustration. What do you think?
    💬 Let's see what you get in this edition:
Apt receiving a much-needed upgrade. Lots happening in the open source space. An early look at LMDE 7 and Zorin OS 18. And other Linux news, tips, and, of course, memes!
📰 Linux and Open Source News
OBS Studio 32.0 introduces a new plugin manager.
Apt is finally getting support for a history command.
The eBPF Foundation has awarded $100K in research grants.
Git 3.0 might make Rust mandatory, though this is not yet final.
LMDE 7 beta is here with a Debian 13 base and lots of new bits.
Zorin OS 18 beta is here with a fresh design and many new features.
A new proposal has been floated to make Linux multi-kernel friendly. If approved, Linux could one day run multiple kernels simultaneously.
🧠 What We’re Thinking About
    A coalition of open source organizations has called out predatory practices.
Open Source Infrastructure is Breaking Down Due to Corporate Freeloading: an unprecedented threat looms over open source.
If you are around South Korea, then you should definitely attend this year's Open Source Summit Korea!
    🧮 Linux Tips, Tutorials, and Learnings
Learn how to make the best out of Polybar in Xfce.
If you have ever wondered what an immutable distro is, then we have got you covered.
These distros and tools offer Hyprland preconfigured, lowering the entry barrier for newcomers.
👷 AI, Homelab and Hardware Corner
    Cool down your Raspberry Pi in style with these mini PC cases.
Raspberry Pi 5 Tower Cases to Give it a Desktop Gaming Rig Look: Pi 5 is a remarkable device and it deserves an awesome case. Transform your Raspberry Pi 5 into a miniature desktop tower PC with these cases.
Also explore some must-know Ollama commands to manage local AI models.
    ✨ Project Highlight
    Net Commander is a new project from Elelab that brings network troubleshooting, Wi-Fi surveys, SSH jumping, CIDR calculations, and more into VS Code.
    The author had reached out to us, but we haven't tested the plugin extensively yet.
GitHub - elelabdev/net-commander: Net Commander supercharges Visual Studio Code for Network Engineers, DevOps Engineers and Solution Architects, streamlining everyday workflows and accelerating data-driven root-cause analysis.
📽️ Videos I Am Creating for You
    Explore DuckDuckGo's lesser known features in our latest video.
Subscribe to It's FOSS YouTube Channel
Desktop Linux is mostly neglected by the industry but loved by the community. For the past 13 years, It's FOSS has been helping people use Linux on their personal computers. And we are now facing an existential threat from AI models stealing our content.
    If you like what we do and would love to support our work, please become an It's FOSS Plus member. It costs $24 a year (less than the cost of a McDonald's burger a month), and you get an ad-free reading experience with the satisfaction of helping the desktop Linux community.
Join It's FOSS Plus
💡 Quick Handy Tip
    In GNOME's Nautilus file manager, you can drag and drop a tab from one window to another Nautilus window, just like browsers. Or, drag it out to open it as a new window.
    See below to learn how. 👇
    🎋 Fun in the FOSSverse
    🧩 Quiz Time: Open source is full of forks; can you match the projects with their community-based forks/alternatives?
Community Strikes Back [Puzzle]: Forked it!
🤣 Meme of the Week: The contempt is real, people. ☠️
    🗓️ Tech Trivia: On September 22, 1986, a U.S. federal judge ruled that computer code could be copyrighted, giving software the same legal protections as books and other written works.
    🧑‍🤝‍🧑 From the Community: One of our regular FOSSers has a question about terminals. Can you help?
Terminal: What app do you use to see a .log file through pagination and with colors?
"Hello Friends. In a terminal, what app do you use to see a .log file through pagination and with colors? I did a quick search on the web and I found https://lnav.org/ (not tested yet), but I am just curious if you have your own recommendation. It is to be used with https://logback.qos.ch where the following logger levels are used: trace, debug, info, warn, error. If I use Visual Studio Code for long files (20MB-50MB) it consumes RAM like a wolf; it is even worse with many .log files opened at the same tim…"
❤️ With love
    Please share it with your Linux-using friends and encourage them to subscribe (hint: it's here).
    Share the articles in Linux Subreddits and community forums.
    Follow us on Google News and stay updated in your News feed.
    Opt for It's FOSS Plus membership and support us 🙏
    Enjoy FOSS 😄
  11. CSS Typed Arithmetic

    by: Amit Sheen
    Wed, 24 Sep 2025 12:49:22 +0000

    CSS typed arithmetic is genuinely exciting! It opens the door to new kinds of layout composition and animation logic we could only hack before. The first time I published something that leaned on typed arithmetic was in this animation:
CodePen Embed Fallback
But before we dive into what is happening in there, let’s pause and get clear on what typed arithmetic actually is and why it matters for CSS.
    Browser Support: The CSS feature discussed in this article, typed arithmetic, is on the cutting edge. As of the time of writing, browser support is very limited and experimental. To ensure all readers can understand the concepts, the examples throughout this article are accompanied by videos and images, demonstrating the results for those whose browsers do not yet support this functionality. Please check resources like MDN or Can I Use for the latest support status.
    The Types
    If you really want to get what a “type” is in CSS, think about TypeScript. Now forget about TypeScript. This is a CSS article, where semantics actually matter.
    In CSS, a type describes the unit space a value lives in, and is called a data-type. Every CSS value belongs to a specific type, and each CSS property and function only accepts the data type (or types) it expects.
Properties like opacity or scale use a plain <number> with no units.
width, height, other box metrics, and many additional properties use <length> units like px, rem, cm, etc.
Functions like rotate() or conic-gradient() use an <angle> with deg, rad, or turn.
animation and transition use <time> for their duration in seconds (s) or milliseconds (ms).
Note: You can identify CSS data types in the specs, on MDN, and other official references by their angle brackets: <data-type>.
    There are many more data types like <percentage>, <frequency>, and <resolution>, but the types mentioned above cover most of our daily use cases and are all we will need for our discussion today. The mathematical concept remains the same for (almost) all types.
    I say “almost” all types for one reason: not every data type is calculable. For instance, types like <color>, <string>, or <image> cannot be used in mathematical operations. An expression like "foo" * red would be meaningless. So, when we discuss mathematics in general, and typed arithmetic in particular, it is crucial to use types that are inherently calculable, like <length>, <angle>, or <number>.
    The Rules of Typed Arithmetic
    Even when we use calculable data types, there are still limitations and important rules to keep in mind when performing mathematical operations on them.
    Addition and Subtraction
    Sadly, a mix-and-match approach doesn’t really work here. Expressions like calc(3em + 45deg) or calc(6s - 3px) will not produce a logical result. When adding or subtracting, you must stick to the same data type.
    Of course, you can add and subtract different units within the same type, like calc(4em + 20px) or calc(300deg - 1rad).
    Multiplication
    With multiplication, you can only multiply by a plain <number> type. For example: calc(3px * 7), calc(10deg * 6), or calc(40ms * 4). The result will always adopt the type and unit of the first value, with the new value being the product of the multiplication.
    But why can you only multiply by a number? If we tried something like calc(10px * 10px) and assumed it followed “regular” math, we would expect a result of 100px². However, there are no squared pixels in CSS, and certainly no square degrees (though that could be interesting…). Because such a result is invalid, CSS only permits multiplying typed values by unitless numbers.
    Division
    Here, too, mixing and matching incompatible types is not allowed, and you can divide by a plain number just as you can multiply by one. But what happens when you divide a type by the same type?
    Hint: this is where things get interesting.
    Again, if we were thinking in terms of regular math, we would expect the units to cancel each other out, leaving only the calculated value. For example, 90x / 6x = 15. In CSS, however, this isn’t the case. Sorry, it wasn’t the case.
    Previously, an expression like calc(70px / 10px) would have been invalid. But starting with Chrome 140 (and hopefully soon in all other browsers), this expression now returns a valid number, which winds up being 7 in this case. This is the major change that typed arithmetic enables.
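As a minimal sketch of what that unlocks (the class name and custom property here are hypothetical, and this requires a browser with typed-arithmetic support, such as Chrome 140+ at the time of writing):

```css
.demo {
  /* dividing two <length> values now yields a plain <number> */
  --ratio: calc(70px / 10px); /* resolves to the number 7 */

  /* that number can feed any <number>-valued property */
  opacity: calc(var(--ratio) / 10); /* 0.7 */
}
```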
    Is that all?!
    That little division? Is that the big thing I called “genuinely exciting”? Yes! Because this one little feature opens the door to a world of creative possibilities. Case in point: we can convert values from one data type to another and mathematically condition values of one type based on another, just like in the swirl example I demoed at the top.
    So, to understand what is happening there, let’s look at a more simplified swirl:
CodePen Embed Fallback
I have a container <div> with 36 <i> elements in the markup that are arranged in a spiral with CSS. Each element has an angle relative to the center point, rotate(var(--angle)), and a distance from that center point, translateX(var(--distance)).
    The angle calculation is quite direct. I take the index of each <i> element using sibling-index() and multiply it by 10deg. So, the first element with an index of 1 will be rotated by 10 degrees (1 * 10deg), the second by 20 degrees (2 * 10deg), the third by 30 degrees (3 * 10deg), and so on.
i { --angle: calc(sibling-index() * 10deg); }
As for the distance, I want it to be directly proportional to the angle. I first use typed arithmetic to divide the angle by 360 degrees: var(--angle) / 360deg.
    This returns the angle’s value, but as a unitless number, which I can then use anywhere. In this case, I can multiply it by a <length> value (e.g. 180px) that determines the element’s distance from the center point.
i { --angle: calc(sibling-index() * 10deg); --distance: calc(var(--angle) / 360deg * 180px); }
This way, the ratio between the angle and the distance remains constant. Even if we set the angle of each element differently, or to a new value, the elements will still align on the same spiral.
    The Importance of the Divisor’s Unit
    It’s important to clarify that when using typed arithmetic this way, you get a unitless number, but its value is relative to the unit of the divisor.
    In our simplified spiral, we divided the angle by 360deg. The resulting unitless number, therefore, represents the value in degrees. If we had divided by 1turn instead, the result would be completely different — even though 1turn is equivalent to 360deg, the resulting unitless number would represent the value in turns.
    A clearer example can be seen with <length> values.
    Let’s say we are working with a screen width of 1080px. If we divide the screen width (100vw) by 1px, we get the number of pixels that fit into the screen width, which is, of course, 1080.
calc(100vw / 1px) /* 1080 */
However, if we divide that same width by 1em (and assume a font size of 16px), we get the number of em units that fit across the screen.
calc(100vw / 1em) /* 67.5 */
The resulting number is unitless in both cases, but its meaning is entirely dependent on the unit of the value we divided by.
    From Length to Angle
    Of course, this conversion doesn’t have to be from a type <angle> to a type <length>. Here is an example that calculates an element’s angle based on the screen width (100vw), creating a new and unusual kind of responsiveness.
CodePen Embed Fallback
And get this: there are no media queries in here! It’s all happening in a single line of CSS doing the calculations.
    To determine the angle, I first define the width range I want to work within. clamp(300px, 100vw, 700px) gives me a closed range of 400px, from 300px to 700px. I then subtract 700px from this range, which gives me a new range, from -400px to 0px.
    Using typed arithmetic, I then divide this range by 400px, which gives me a normalized, unitless number between -1 and 0. And finally, I convert this number into an <angle> by multiplying it by -90deg.
    Here’s what that looks like in CSS when we put it all together:
p { rotate: calc(((clamp(300px, 100vw, 700px) - 700px) / 400px) * -90deg); }
From Length to Opacity
    Of course, the resulting unitless number can be used as-is in any property that accepts a <number> data type, such as opacity. What if I want to determine the font’s opacity based on its size, making smaller fonts more opaque and therefore clearer? Is it possible? Absolutely.
CodePen Embed Fallback
In this example, I am setting a different font-size value for each <p> element using a --font-size custom property. And since the range of this variable is from 0.8rem to 2rem, I first subtract 0.8rem from it to create a new range of 0 to 1.2rem.
    I could divide this range by 1.2rem to get a normalized, unitless value between 0 and 1. However, because I don’t want the text to become fully transparent, I divide it by twice that amount (2.4rem). This gives me a result between 0 and 0.5, which I then subtract from the maximum opacity of 1.
p { font-size: var(--font-size, 1rem); opacity: calc(1 - (var(--font-size, 1rem) - 0.8rem) / 2.4rem); }
Notice that I am displaying the font size in pixel units even though the size is defined in rem units. I simply use typed arithmetic to divide the font size by 1px, which gives me the size in pixels as a unitless value. I then inject this value into the content of the paragraph’s ::after pseudo-element.
p::after { counter-reset: px calc(var(--font-size, 1rem) / 1px); content: counter(px) 'px'; }
Dynamic Width Colors
    Of course, the real beauty of using native CSS math functions, compared to other approaches, is that everything happens dynamically at runtime. Here, for example, is a small demo where I color the element’s background relative to its rendered width.
p { --hue: calc(100cqi / 1px); background-color: hsl(var(--hue, 0) 75% 25%); }
You can drag the bottom-right corner of the element to see how the color changes in real time.
CodePen Embed Fallback
Here’s something neat about this demo: because the element’s default width is 50% of the screen width and the color is directly proportional to that width, it’s possible that the element will initially appear in completely different colors on different devices with different screens. Again, this is all happening without any media queries or JavaScript.
    An Extreme Example: Chaining Conversions
    OK, so we’ve established that typed arithmetic is cool and opens up new and exciting possibilities. Before we put a bow on this, I wanted to pit this concept against a more extreme example. I tried to imagine what would happen if we took a <length> type, converted it to a <number> type, then to an <angle> type, back to a <number> type, and, from there, back to a <length> type.
    Phew!
    I couldn’t find a real-world use case for such a chain, but I did wonder what would happen if we were to animate an element’s width and use that width to determine the height of something else. All the calculations might not be necessary (maybe?), but I think I found something that looks pretty cool.
CodePen Embed Fallback
In this demo, the animation is on the solid line along the bottom. The vertical position of the ball, i.e. its height relative to the line, is proportional to the line’s width. So, as the line expands and contracts, so does the path of the bouncing ball.
    To create the parabolic arc that the ball moves along, I take the element’s width (100cqi) and, using typed arithmetic, divide it by 300px to get a unitless number between 0 and 1. I multiply that by 180deg to get an angle that I use in a sin() function (Juan Diego has a great article on this), which returns another unitless number between 0 and 1, but with a parabolic distribution of values.
    Finally, I multiply this number by -200px, which outputs the ball’s vertical position relative to the line.
.ball { --translateY: calc(sin(calc(100cqi / 300px) * 180deg) * -200px); translate: -50% var(--translateY, 0); }
And again, because the ball’s position is relative to the line’s width, the ball will remain on the same arc, no matter how we define that width.
    Wrapping Up: The Dawn of Computational CSS
    The ability to divide one typed value by another to produce a unitless number might seem like no big deal; more like a minor footnote in the grand history of CSS.
    But as we’ve seen, this single feature is a quiet revolution. It dismantles the long-standing walls between different CSS data types, transforming them from isolated silos into a connected, interoperable system. We’ve moved beyond simple calculations, and entered the era of true Computational CSS.
    This isn’t just about finding new ways to style a button or animate a loading spinner. It represents a fundamental shift in our mental model. We are no longer merely declaring static styles, but rather defining dynamic, mathematical relationships between properties. The width of an element can now intrinsically know about its color, an angle can dictate a distance, and a font’s size can determine its own visibility.
    This is CSS becoming self-aware, capable of creating complex behaviors and responsive designs that adapt with a precision and elegance that previously required JavaScript.
    So, the next time you find yourself reaching for JavaScript to bridge a gap between two CSS properties, pause for a moment. Ask yourself if there’s a mathematical relationship you can define instead. You might be surprised at how far you can go with just a few lines of CSS.
    The Future is Calculable
    The examples in this article are just the first steps into a much larger world. What happens when we start mixing these techniques with scroll-driven animations, view transitions, and other modern CSS features? The potential for creating intricate data visualizations, generative art, and truly fluid user interfaces, all natively in CSS, is immense. We are being handed a new set of creative tools, and the instruction manual is still being written.
    CSS Typed Arithmetic originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  12. by: Umair Khurshid
    Wed, 24 Sep 2025 16:57:20 +0530

    Unlock the full potential of your Linux system by replacing the classic cron jobs with modern, powerful systemd automation. Learn how to schedule, monitor, sandbox, and optimize automated workflows like a pro, all while leveraging the same tools used by your Linux system itself.
    Why systemd instead of cron?
    Cron has been around for decades, but it’s limited. It can’t monitor dependencies, doesn’t integrate with system logging, and has no native way to handle failures gracefully.
    systemd is the future of automation on Linux. It’s not just a service manager, it’s a complete automation framework that lets you:
Schedule tasks with precision using timers
Automate complex, dependent workflows
Sandbox risky jobs for security
Monitor, debug, and optimize jobs like a sysadmin ninja
If you’re ready to ditch cron and take advantage of systemd’s power, this course is for you.
    What will you learn?
    Module 1. Timers and Automated Task Scheduling: Forget crontab -e. Learn how to build systemd timers for recurring and one-off jobs, with logging and error handling built in.
    Module 2. Automating Complex Workflows with Targets: Master targets to run multi-service workflows in the right order, ensuring dependencies are respected.
    Module 3. systemd-nspawn and machinectl for Repeatable Environments: Create reproducible containers with systemd-nspawn for development, testing, and automation pipelines.
    Module 4. Automated Resource Management: Leverage systemd’s cgroup integration to limit CPU, memory, and IO for your automated tasks.
    Module 5. Sandboxing Directives for Safer Automation: Use built-in sandboxing features to isolate automated jobs and protect your system from accidental damage.
    Module 6. Debugging and Monitoring Automated Services: Learn how to use journalctl, systemctl status, and other tools to troubleshoot like a pro.
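As a small taste of what Module 1 covers, a minimal user-level service-plus-timer pair might look like this (the unit names and the ExecStart command are hypothetical examples, not from the course):

```ini
; ~/.config/systemd/user/backup.service
[Unit]
Description=Example one-shot backup job

[Service]
Type=oneshot
ExecStart=/usr/bin/echo "backup ran"

; ~/.config/systemd/user/backup.timer
[Unit]
Description=Run the example backup daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

You would enable it with systemctl --user enable --now backup.timer and review past runs with journalctl --user -u backup.service, which is exactly the logging integration cron lacks.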
    How to use this course?
    You will gain practical skills through hands-on exercises and real-world scenarios.
    The best approach here would be to follow the instructions and commands on a Linux system installed in a virtual machine or on a dedicated test machine.
    By the end, you'll have the knowledge and confidence to manage your Linux system more effectively using systemd.
  13. by: Chris Coyier
    Tue, 23 Sep 2025 17:33:00 +0000

    Chris & Marie jump on the podcast to talk about just how drastically customer support has changed over the last few years. We still exclusively do customer support over email. Incoming emails from real customers who need a hand with something, where they type out the email in plain language themselves, are few and far between. Instead we get an onslaught of noise from users that don’t exist about Pens and situations that don’t exist. The influence of agentic AI is massive here, some of it with nefarious intent and some not. All of it takes work to mitigate.
    Time Jumps
00:07 How much support has changed in the last 2 years
01:12 How do we do support at CodePen in 2025
07:41 How much noise AI has added to support
14:02 Verifying accounts before they’re allowed to use support or CodePen
23:05 Some of the changes we’ve made to help deal with AI
29:50 The benefits of learning to code with AI
  14. by: Chris Coyier
    Mon, 22 Sep 2025 15:33:16 +0000

    Adam Argyle is clear with some 2025 CSS advice:
    Nobody asked me, but if I had to pick a favorite of Adam’s six, it’s all the stuff about animating <dialog>, popover, and <details>. There is a lot of interesting new-ish CSS stuff in there that will help you all around, between allow-discrete, overlay, ::backdrop, :popover-open, @starting-style, and more.
/* enable transitions, allow-discrete, define timing */
[popover], dialog, ::backdrop {
  transition: display 1s allow-discrete, overlay 1s allow-discrete, opacity 1s;
  opacity: 0;
}

/* ON STAGE */
:popover-open, :popover-open::backdrop, [open], [open]::backdrop {
  opacity: 1;
}

/* OFF STAGE */
/* starting-style for pre-positioning (enter stage from here) */
@starting-style {
  :popover-open, :popover-open::backdrop, [open], [open]::backdrop {
    opacity: 0;
  }
}

Jeremy Keith also did a little post with CSS snippets in it, including a bit he overlaps with Adam on, where you by default opt-in to View Transitions, even if that’s all you do.
@media (prefers-reduced-motion: no-preference) {
  @view-transition {
    navigation: auto;
  }
}

The idea is you get the cross-fade right away and then are set up to sprinkle in more cross-page animation when you’re ready.
    Una Kravets has a post about the very new @function stuff in CSS with a bunch of examples. I enjoyed this little snippet:
/* Take up 1fr of space for the sidebar on screens smaller than 640px, and take up the --sidebar-width for larger screens. 20ch is the fallback. */
@function --layout-sidebar(--sidebar-width: 20ch) {
  result: 1fr;
  @media (width > 640px) {
    result: var(--sidebar-width) auto;
  }
}

.layout {
  display: grid;
  grid-template-columns: --layout-sidebar();
}

I’m intrigued by the idea of being able to abstract away the logic in CSS when we want to. Perhaps making it more reusable and making the more declarative parts of CSS easier to read.
    Here’s another. I had absolutely no idea design control over the caret was coming to CSS (the thing in editable areas where you’re typing, that is usually a blinking vertical line). I guess I knew we had caret-color, which is prettttttty niche if you ask me. But now apparently we’re given control over the shape and literal animation of the caret.
textarea {
  color: white;
  background: black;
  caret-shape: block;
  caret-animation: manual;
  animation: caret-block 2s step-end infinite;
}

@keyframes caret-block {
  0% { caret-color: #00d2ff; }
  50% { caret-color: #ffa6b9; }
}

Jump over to the Igalia blog post to see the video on that one.
    OK that’s all for this week. Er wait actually you gotta watch Julia Miocene’s Chicken video. Now I’m done.
  15. by: Sreenath
    Mon, 22 Sep 2025 11:33:58 GMT

    Most major Linux desktop environments like GNOME, KDE Plasma, and Xfce come with their own built-in panels for launching apps, switching workspaces, and keeping track of what’s happening on your system.
Example of top panel in Xfce
One of the best things about Linux is the freedom to customize, and there are plenty of alternatives out there if you want something more flexible or visually appealing for your panel.
    Polybar is a standout choice among these alternatives. It’s a fast, highly customizable status bar that not only looks great but is also easy to configure.
    If you’re running an X11-based setup, such as the i3 window manager or even Xfce, Polybar can really elevate the look of your desktop, help you keep essential info at your fingertips, and make better use of your screen space.
Example of Polybar in Xfce
We used Polybar in our Xfce customization video, and that is where we got the idea to do a detailed tutorial on it.
    Subscribe to It's FOSS YouTube ChannelIn this guide, we’ll build a sleek Polybar panel just like the one featured in our Xfce customization video above. Along the way, you’ll get a solid introduction to the basics of Polybar customization to help you tailor the panel to your own style and workflow.
🚧 This article is not trying to replace the Polybar Wiki. You can and should read the wiki while customizing Polybar. This article acts as a companion to help beginners get started.
Installing Polybar
💡 Most tweaks here are done through the config file at the user level. If you get easily overwhelmed and don't like troubleshooting, you should probably create a new user account, or try these things on a fresh system in a VM or on a spare machine. This way, you won't impact your main system. Just a suggestion.
Polybar is a popular project and is available in the official repositories of most major Linux distributions, including Ubuntu, Debian, Arch Linux, Fedora, etc.
    If you are a Debian/Ubuntu user, use:
sudo apt install polybar
For Arch Linux users:
sudo pacman -S polybar
In Fedora Linux, use the command:
sudo dnf install polybar
Once you install Polybar, you can actually use it with the default config by running the command:
    polybar Add it to the list of autostart applications to make the bar automatically start at system login.
    Initial configuration setups
    Let's say you don't want the default config and you want to start from scratch.
    First, make a directory called polybar in your ~/.config directory.
mkdir -p ~/.config/polybar
Then create a config file called config.ini for Polybar in this location:
touch ~/.config/polybar/config.ini
Now, you have an empty config file. It's time to 'code'.
    Config file structure
The Polybar config file has a structure that keeps things clean and easy to work with.
The whole config can be divided broadly into four parts.
Colors: Define the colors to use across Polybar.
Bar: Define the properties of the whole bar.
Modules: Individual bar modules are defined here.
Scripts: These are not inside the config, but external shell and other scripts that enhance Polybar's functionality.
Define the colors
It is not convenient to write all the colors in hex code separately. While this is fine during rough coding, it will create headaches later on when you want to change colors in bulk.
    You can define a set of general colors in the beginning to make things easier.
    See an example here:
[colors]
background = #282A2E
window-background = #DE282A2E
background-alt = #373B41
border-color = #0027A1B9
foreground = #C5C8C6
primary = #88c0d0
secondary = #8ABEB7
alert = #A54242
disabled = #707880
aurora-blue = #27A1B9
aurora-orange = #FF9535
aurora-yellow = #FFFDBB
aurora-green = #53E8D4
aurora-violet = #8921C2
nord-background = #4c566a
That is the common definition syntax. Now, to refer to any color in the list, you can use:
key = ${colors.colorvariable}
For example, if you want to set the foreground color in a module, you will use:
foreground = ${colors.foreground}
💡 If you intend to change the entire color palette of the bar, all you have to do is create a new color palette and paste it in the config. No need to change the individual colors of all modules and sub-items.
Setting the bar
In simple words, the bar is the panel itself; the one that contains all the other modules.
    Polybar allows you to have multiple bars. Perhaps that's the reason why it is called 'polybar'. These bars can be named separately in the config file, with their own set of modules.
A bar is defined with the syntax:
[bar/<barname>]
option = value
option = value

[bar/<barname2>]
option = value
option = value
Let’s say I am creating a top bar and a bottom bar. My simple syntax will be:
[bar/mytopbar]
options = values

[bar/mybottombar]
options = values
There will be plenty of options and values to use, as you will see later in this tutorial.
    Now, if you want to open only the top bar, use:
polybar mytopbar
Configure the bar
    You have seen the general syntax of the bar that mentions options and values. Now, let’s see some options.
I am giving you a code block below and will explain the options with its help.
monitor = HDMI-1
width = 100%
height = 20pt
radius = 5
fixed-center = true

background = ${colors.window-background}
foreground = ${colors.foreground}

line-size = 3pt

border-size = 2.5pt
border-color = ${colors.border-color}

padding-left = 0
padding-right = 0

module-margin = 1

separator = "|"
separator-foreground = ${colors.disabled}

font-0 = "JetBrains Mono:size=10;3"
font-1 = monospace;2
font-2 = "FiraCode Nerd Font:size=11;2"
font-3 = "Symbols Nerd Font:size=20;4"

modules-left = mymenu ewmh
modules-center = date temperature pacupdate
modules-right = pulseaudio memory cpu eth magic-click sessionLogout

enable-ipc = true
The main options you may want to take a closer look at are:
monitor: As the name suggests, this decides which monitor the Polybar appears on. Use the xrandr command to get the name of the display. If you are using a multi-monitor setup, you can define a second bar placed on the second monitor, and so on.
separator: This is the separator used between the modules appearing in Polybar. You can use any character here, including Nerd Font glyphs (given that the Nerd Font is installed on the system).
font-n: These are the fonts to be used in the bar. The numbering provides fallbacks; that is, if the font mentioned first is not available, the next one is used. Pay special attention to the Nerd Fonts we have set at font-2 and font-3. This will be explained in a later section.
modules-left, modules-center, modules-right: Keys used to arrange the modules in the bar. Place a module name in any of these sections and it appears in that part of the bar.
enable-ipc: Enables inter-process communication. This allows scripts or external apps to send commands (like module updates or bar reloads) to Polybar in real time.
The above-mentioned options are enough for a working bar. The rest are mostly self-explanatory. You can read more about other options and get more help from the official wiki of Polybar.
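As a side note on the monitor option: instead of hard-coding a display name, Polybar can read it from an environment variable with a fallback. A sketch, assuming HDMI-1 as the fallback display:

```ini
[bar/mytopbar]
; use the MONITOR environment variable, falling back to HDMI-1 when unset
monitor = ${env:MONITOR:HDMI-1}
```

You can then start one instance per display, e.g. MONITOR=eDP-1 polybar mytopbar &.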
    Modules
    Now that you have placed the bar, it's time to start adding the items.
    If you have looked at the above piece of script, you would have noticed that there are some entries in the modules-left, modules-center, and modules-right keys. They are mymenu ewmh, date temperature pacupdate, and pulseaudio memory cpu eth magic-click sessionLogout respectively.
    These are calling modules to the bar and placing them in the required position.
In order to call them to the bar, they need to be defined, i.e., you specify what to display at that position. So, our next part is defining the modules.
The general syntax for a module is:
[module/MY_MODULE_NAME]
type = MODULE_TYPE
option1 = value1
option2 = value2
...
Here, the MODULE_TYPE for each module can be found on the Polybar Wiki page that explains modules. For example, refer to the CPU module wiki in Polybar.
Getting the module name
The type here will be:
type = internal/cpu
🚧 I will be using several modules here that will create a fine panel for a beginner. You should read the wiki for more modules and customizations as required for your needs.
Add Workspaces
Workspaces are a great way to increase productivity by avoiding cluttered windows in front of you. In Polybar, we will be using the ewmh module to get workspaces in the panel.
    Let's see a sample config:
[module/ewmh]
type = internal/xworkspaces

icon-0 = 1;
icon-1 = 2;󰚢
icon-2 = 3;
icon-3 = 4;
icon-4 = 5;
icon-5 = 6;
icon-6 = 7;
icon-7 = 8;
icon-8 = 9;
icon-9 = 10;

format = <label-state>
format-font = 2
#group-by-monitor = false
#pin-workspaces = false

label-active = %icon%
label-active-background = ${colors.background-alt}
label-active-foreground = #00000000
label-active-padding = 2

label-occupied = %icon%
label-occupied-padding = 1

label-urgent = %icon%
label-urgent-background = ${colors.primary}
label-urgent-padding = 1

label-empty = %icon%
label-empty-foreground = ${colors.disabled}
label-empty-padding = 1
We have already seen what type is in the previous section.
    In workspaces, you should be able to see icons/numbers for each workspace. These icons are defined in the icon-n key. The n here corresponds to the workspace number.
    For desktops like Xfce, the number of workspaces available is managed by the desktop. So, if you are adding icons for 5 workspaces, make sure you have created 5 workspaces in the system settings.
    For example, in Xfce, you can search for Virtual Desktops in the menu and set the number of workspaces available in the system.
The format option tells the bar what to show for each workspace. We have set it as <label-state>. This means we will define some states (active, empty, occupied, urgent) for the workspaces, and the display will change accordingly.
The format-font key tells Polybar which font to use for the module. Keep in mind that font references are 1-indexed while font definitions are 0-indexed, so format-font = 2 refers to font-1 from the bar section, and the Symbols Nerd Font defined as font-3 would be referenced as format-font = 4. Since the workspace icons are Nerd Font glyphs pasted from Nerd Fonts, pointing this key at the Symbols Nerd Font entry makes them display properly.
    Look at the code below:
label-active = %icon%
label-active-background = ${colors.background-alt}
label-active-foreground = #00000000
label-active-padding = 2
This sets the value %icon% when the workspace is active. When Polybar sees %icon%, it swaps it with the icons defined above, that is, icon-N. The rest of the options are visual tweaks for each state, like background color, foreground color, etc.
    If you are using nerd fonts for this, these fonts will change their color according to the set foreground color.
The same is done as needed for the other states like empty, urgent, etc. It is up to your creativity to assign values to these states to make them visually pleasing.
Video: Switch Workspaces in Polybar
    What is the time now?
    A panel without a date is useless! Let's add a date block to Polybar.
    The type we use for a date module is:
type = internal/date
We need to format it so that it looks better. Take a look at the sample code below:
[module/date]
type = internal/date
interval = 1.0

time = %I:%M %p
date = %d-%m-%Y
date-alt = "%{F#FF9535}%Y-%m-%d %I:%M:%S %p%{F-}"

label = %date% %time%
label-font = 5
label-foreground = ${colors.aurora-yellow}

format = 󱑂 <label>
format-prefix-font = 2
First is the refresh rate. We set the clock to refresh every second with interval = 1.0. The value is in seconds.
Next, define what to show with the time key. It has to be in strftime format. You can read the full format specification in the strftime man page.
    For now, we are using the format %I:%M %p, that will show the time as 12:30 PM.
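A handy trick: the date command in your terminal accepts the same strftime codes, so you can preview a format before putting it in the config:

```shell
# Preview strftime format strings with date(1) before using them in Polybar
date +"%I:%M %p"               # 12-hour time, e.g. 12:30 PM
date +"%d-%m-%Y"               # day-month-year, e.g. 25-07-2025
date +"%Y-%m-%d %I:%M:%S %p"   # the detailed format used in date-alt below
```

Whatever these commands print is exactly what the module will show.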
We are going a bit further to show you that there is more to the date module.
    Use the date key to set the date format. I am using the format %d-%m-%Y, which will output 25-07-2025.
    The date-alt key can be used to show another date format when you click on the date module in the bar.
💡 You can remember it like this: if there is an alt in the name of a key, then it defines an action that becomes available upon clicking that module.
The syntax %{F#RRGGBB} in Polybar is used to set the foreground color dynamically within the module’s label or format string. This is like the <span> tag in HTML.
So this tells Polybar “from here on, use this foreground (text) color,” and once %{F-} is spotted, the color resets to what it was before.
So, according to the code, when we click on the date module, it will show the detailed date format %Y-%m-%d %I:%M:%S %p, which in the real world looks like 2025-07-25 12:30:25 PM.
Video: Showing the date in Polybar with an alternate format
The label = %date% %time% line makes sure the bar shows the date and time properly.
The format = 󱑂 <label> line shows the date with a preceding Nerd Font icon.
Most of the time, it is in the format key that you add icons/glyphs to appear on the bar.
    How do I change the volume?
The most common way to change the volume on most systems is to scroll over the volume icon in the panel. This is possible with Polybar as well.
Let's see the code for the module:
[module/pulseaudio]
type = internal/pulseaudio
format-volume-prefix-foreground = ${colors.primary}
format-volume = <label-volume> <ramp-volume>
label-volume = %percentage%%
use-ui-max = false
click-right = pavucontrol

label-muted = " Mute"
label-muted-foreground = ${colors.disabled}
format-muted = <label-muted>
format-muted-prefix = 󰝟
format-muted-prefix-font = 2
format-muted-padding = 1

; Ramp settings using <ramp-volume> used for Pulseaudio
ramp-volume-0 = 󰝟
ramp-volume-1 = ▁
ramp-volume-2 = ▂
ramp-volume-3 = ▃
ramp-volume-4 = ▄
ramp-volume-5 = ▅
ramp-volume-6 = ▆
ramp-volume-7 = ▇
ramp-volume-8 = █
ramp-volume-font = 2
As you expected, type = internal/pulseaudio is the module type.
The next entry to look at is format-volume. Here, we see a new item called <ramp-volume>. If you look further down the code, you can see I have defined 9 levels (0 to 8) of ramp.
This ramp-<item> is available in some other modules too, so understanding it here makes it easier to use as required. For example, the cpu module gives a ramp-coreload, the memory module gives ramp-used and ramp-free, etc.
It shows a visual indicator (like volume bars or icons) depending on the number of ramp levels. For example, in the module above, the 100% volume range is divided into 9 equal buckets, so as the volume increases, the appropriate bar is shown.
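To make the bucketing concrete, here is a tiny shell sketch of the idea; the exact rounding Polybar uses internally may differ:

```shell
# Map a 0-100 percentage onto 9 ramp slots (0-8), roughly how Polybar
# would pick which ramp-volume-N glyph to display.
ramp_index() {
    pct=$1
    slots=9
    echo $(( pct * (slots - 1) / 100 ))
}

ramp_index 10    # low volume  -> slot 0
ramp_index 50    # half volume -> slot 4
ramp_index 100   # full volume -> slot 8
```

More ramp-volume-N entries simply mean finer buckets.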
Video: Changing the volume with ramps
Other useful options are the mouse-click items. Generally, you have three of them available:
click-left
click-middle
click-right
This is not limited to pulseaudio; you can use them in some other modules too. For that, refer to the wiki page.
    Tray
Many apps need a system tray to work properly. Discord, Spotify, Ksnip, and Flameshot all provide a close-to-tray option as well.
    In Polybar, you will be using the tray module for this purpose.
[module/tray]
type = internal/tray
format-margin = 8px
tray-spacing = 8px
It has several options you can try, listed in the official wiki. Rewriting them here would not be efficient, since a bare module serves most purposes.
🚧 In Linux systems, only one panel can own the tray, so you only need to add it to one bar. Similarly, in Xfce and other desktops that offer a panel with a tray by default, the tray module will not work properly.
Scripts and Custom Modules
Explaining bash or Python scripting is beyond the scope of this article. But we will see custom modules in Polybar that you can use to take its functionality to the next level.
With Polybar, you can create shell scripts and then use them in modules. For example, take a look at the code below, which defines a custom module to show whether any package updates are available in Arch Linux:
[module/pacupdate]
type = custom/script
exec = /home/$USER/.config/polybar/pacupdates.sh
interval = 1000
label = %output%
format-font = 3
click-left = notify-send "Updates:" "$(checkupdates)"
As you can see, I got the type custom/script from the wiki page for scripts.
Check the exec field. It specifies what the module executes. This can either be a simple command or a path to a script. Here, I pointed it to a script called pacupdates.sh located in my ~/.config/polybar/ directory.
The contents of the script are available in our GitHub repo. What it does is check and report whether any package updates are available.
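If you'd rather not fetch the repo just yet, here is a hypothetical sketch of what such a script could look like; this is not the actual pacupdates.sh, and it assumes checkupdates from Arch's pacman-contrib package (the count simply falls back to 0 when the command is unavailable):

```shell
#!/bin/sh
# Hypothetical sketch of an update-counting script for a custom/script
# module (not the actual pacupdates.sh from the It's FOSS repo).

count_updates() {
    # checkupdates lists one pending update per line (pacman-contrib);
    # if the command is missing, the pipeline prints 0
    checkupdates 2>/dev/null | wc -l
}

updates=$(count_updates)
if [ "$updates" -gt 0 ]; then
    echo "$updates updates available"
else
    echo "Up to date"
fi
```

Whatever the script echoes is what %output% shows in the bar.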
Video: A custom script that prints what updates are available on the system when clicked
    This is not an in-built module in Polybar. We have created it. With that, let's see a general syntax for custom modules:
[module/MODULE_NAME]
type = custom/script
exec = COMMAND_OR_SCRIPT_PATH
interval = SECONDS
label = %output%
format = <label>
format-prefix = "ICON_OR_TEXT "
format-prefix-font = FONT_INDEX
click-left = COMMAND_ON_LEFT_CLICK
click-right = COMMAND_ON_RIGHT_CLICK
click-middle = COMMAND_ON_MIDDLE_CLICK
The %output% value of the label (if you remember, you have seen %icon% earlier) refers to the output of the exec field.
    We have seen other values in various other sections above.
    Before we finish, take a look at one more custom module example, which when clicked opens rofi:
[module/mymenu]
type = custom/text
format = <label>
format-padding = 2
label = "%{F#1A1B26} Menu%{F-}"
click-left = /home/sreenathv/.config/polybar/rofi.sh
format-background = ${colors.aurora-blue}
Menu Button
Do not forget to add these modules to one of the modules-left/center/right lists in the bar section after defining them; otherwise they won't appear.
    Wrapping Up
Apart from the modules we discussed, there are many other modules that you can use. We have provided a ready-to-use Polybar config with several scripts on our GitHub page.
Take a look at the code in those files to get a better grasp of Polybar configuration.
    I hope you liked this detailed guide to Polybar customization. If you have any questions or suggestions, please leave a comment and I'll be happy to answer them.
  16. By: Janus Atienza
    Sat, 20 Sep 2025 19:41:56 +0000

    Embracing Next-Level Linux Security Challenges
    Linux runs everything from bleeding-edge research clusters to billion-dollar e-commerce backbones, which makes it a fat target for anyone with skill and bad intentions. The platform’s openness is its strength, but that same transparency gives attackers a clear view of the terrain. In recent years, cryptojacking campaigns have burrowed into unpatched kernels, and supply chain compromises have slipped into package repositories. Rootkits now arrive disguised as flawless kernel modules. If you manage Linux environments, complacency is your most dangerous vulnerability. Expect practical insights here, not hollow pep talks. You will leave with real, applicable strategies.
    Why an Online Cybersecurity Degree in Florida Aligns with Linux Security Goals
    If you’re serious about hardening Linux systems, you need more than scattered study sessions and scattered blog posts. Online programs give you the breathing room to keep working while absorbing structured, high-caliber material. They draw students from wildly diverse backgrounds, which means your discussion forums and group projects mirror the heterogeneity of the global threatscape. Accredited programs with faculty who have dissected live intrusions on Linux servers bring the fight closer to reality.
    Choosing an online cybersecurity degree in Florida means tapping into a curriculum that delivers theory, then forces you to wrestle with it in OS-specific labs. Florida’s programs tend to keep a balanced diet of deep technical dives and strategic risk analysis, letting you master packet-level configurations without drifting into abstract coursework with no connection to practice.
    Core Linux Security Topics Covered by Florida’s Cybersecurity Programs
    Florida’s stronger programs refuse to skim. You will tackle Linux kernel hardening, locking down the attack surface through parameters and module control. SELinux and AppArmor policies aren’t just read about; you’ll tune them to protect production processes without breaking critical ops. Secure shell configuration goes beyond PermitRootLogin no to controlling ciphers, key lengths, and brute-force detection. These topics aren’t busywork. When you learn container isolation, you’ll think about cgroups and namespaces, not just Docker defaults. Package management security means signing, verifying, and understanding when a repository has been poisoned, not just apt-get update.
    Hands-On Virtual Labs: Turning Theory into Linux Expertise
    You’ll set up hardened VMs as playgrounds and battlegrounds. Attack simulation injects actual strain on configurations, followed by countermeasures like policy adjustments or packet filtering. In a single lab cycle, you might spin up a fresh Debian image, run OpenVAS scans, capture suspect traffic via Wireshark, then tighten firewall rules with Netfilter. Labs are not optional frills. They engrain muscle memory and command-line reflexes, which is exactly what you want when your real systems are bleeding packets at 3 a.m.
    From Classroom to Command Line: Applying Skills in Real Linux Environments
    Graduates don’t leave theory in the LMS archive. They bring it to bear during system audits, patch rollouts, and log forensics. A graduate uses cron-based audits to catch misconfigurations before a breach window opens. Log parsers flag anomalies tied to newly imported packages. Patches are timed strategically to minimize exposure without derailing uptime SLAs. Keep a short checklist: continuously monitor kernel versions, verify integrity of critical binaries, audit sudoers, and rotate keys before they become stale liabilities.
    Career Paths Fueled by a Florida Cybersecurity Degree and Linux Mastery
    Your Linux expertise is currency. Cash it in as a Linux security engineer, cloud security specialist, or DevSecOps practitioner trusted to secure CI/CD pipelines. Entry-level expectations often include comfort with scripting, understanding of network stack behavior, and practical exposure to security frameworks. The degree closes gaps in both strategy and execution. Add weight to your resume by earning certifications like LPIC-3 or RHCE Security once your course load eases.
    Charting Your Next Steps in Linux-Focused Cybersecurity
    Start by targeting programs whose syllabi include the Linux security modules outlined above. Scrutinize course descriptions and faculty bios. Build a personal study plan that integrates formal assignments with your own system-hardening experiments. Careers here reward those who never let their skillset fossilize. Combine the credibility of a Florida-based online degree with relentless pursuit of Linux mastery, and you’ll stay ahead of whatever brute-force, zero-day, or poisoned package comes at you next.
    The post Strengthening Linux Defenses with an Online Cybersecurity Degree appeared first on Unixmen.
