Blog Entries posted by Blogger

  1. Blogger

    Formatting Text in Logseq

    by: Sreenath
    Fri, 11 Apr 2025 15:09:58 GMT

    Logseq is a highly efficient note-taking and knowledge management app with decent Markdown support.
    While using Logseq, one thing to keep in mind is that the text formatting isn't pure Markdown. This is because Logseq uses bullet blocks as the basic unit of content and also supports Org-mode.
    Whenever you start a new document or press Enter after a sentence, a new block is created — and this block can be referenced from anywhere within Logseq. That’s part of what makes Logseq so powerful.
    Still, formatting your notes clearly is just as important. In this article, we’ll take a closer look at how text formatting works in Logseq.
    Basic Markdown syntax
    As I said above, since Logseq supports Markdown, all the basic Markdown syntax will work here.
    You remember the Markdown syntax, right?
Description and Markdown syntax:

Six levels of headings:
# Level One
## Level Two
### Level Three
#### Level Four
##### Level Five
###### Level Six

Hyperlink: [Link Text](Link Address/URL)

Image: ![Image Caption](Image path)

Bold text: **Bold Text**

Italic text: *Italics*

Strikethrough text: ~~Striked-out Text~~

Inline code: `inline code`

Code block:
```
code block
```

Table:
|Column Header|Column Header|
| ---------------- | ---------------|
| Items | Items |

Logseq Markdown rendering.

💡 You can press the / key to get all the available format options.

Adding quotes
    Quotes can be added in Logseq using two methods.
First, there's the traditional Markdown method of adding a quote: put > in front of the text.

> This should appear as a quote

Second, since Logseq has Org-mode support, you can create a quote block using the syntax:

#+BEGIN_QUOTE
Your Quote text here
#+END_QUOTE

You can access this by pressing the < key, typing Quote, and hitting Enter.

🚧 If you use quotes with the preceding > syntax, every Markdown renderer will render the document properly. The Org-mode syntax won't work in all environments.

Adding Quotes in Logseq (video).
    Add an admonition block
Admonition blocks or callouts come in handy for highlighting a particular piece of information in your notes, like a tip or a warning.
The warning below is a good example.
🚧 These admonition blocks are a feature of the Logseq app. You cannot expect them to work properly in other apps, so plain-text Markdown users should keep that in mind.
The usual Org-mode syntax for these blocks is:
#+BEGIN_<BLOCK NAME>
Your Block Text
#+END_<BLOCK NAME>

For example, a simple tip block looks like:

#+BEGIN_TIP
This is a tip block
#+END_TIP

Let's take a look at some other interesting block names: NOTE, TIP, IMPORTANT, CAUTION, PINNED.

You can access these by typing the < key and then searching for the required block.

Admonition blocks in Logseq (video).
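Following the same pattern, any of the block names above can be swapped in. For example, an important block would look like this (a sketch based on the pattern shown above):

```
#+BEGIN_IMPORTANT
This is an important block
#+END_IMPORTANT
```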
    Conclusion
The ability to add a callout box makes your notes more useful, in my opinion. At least it does for me, as I can highlight important information in my notes. I am a fan of them and you can see plenty of them in my articles on It's FOSS as well.
    Stay tuned with me in this series as I'll share about adding references in Logseq in the next part.
  2. Blogger

    CSS-Tricks Chronicles XLIII

    by: Geoff Graham
    Fri, 11 Apr 2025 12:39:26 +0000

    Normally, I like to publish one of these updates every few months. But seeing as the last one dates back to September of last year, I’m well off that mark and figured it’s high time to put pen to paper. The fact is that a lot is happening around here at CSS-Tricks — and it’s all good stuff.
    The Almanac is rolling
In the last post of 2024, I said that filling the Almanac was a top priority heading into this year. We had recently refreshed the whole dang thing, complete with brand-new sections for documenting CSS selectors, at-rules, and functions on top of the sections we already had for properties and pseudo-selectors. The only problem is that those new sections were pretty bare.
    Well, not only has this team stepped up to produce a bunch of new content for those new sections, but so have you. Together, we’ve published 21 new Almanac entries since the start of 2025. Here they are in all their glory:
animation-timeline, interpolate-size, overlay, @charset, @counter-style, @import, @keyframes, @namespace, @page, @view-transition, attr(), calc-size(), counter(), counters(), hsl(), lab(), lch(), light-dark(), oklch(), rgb(), symbols()

What's even better? There are currently fourteen more in the hopper that we're actively working on. I certainly do not expect us to sustain this sort of pace all year. A lot of work goes into each and every entry. Plus, if all we ever did was write in the Almanac, we would never get new articles and tutorials out to you, which is really what we're all about around here.
    A lot of podcasts and events
    Those of you who know me know that I’m not the most social person in all the land. Yes, I like hanging out with folks and all that, but I tend to keep my activities to back-of-the-house stuff and prefer to stay out of view.
So, that's why it's weird for me to call out a few recent podcast and event appearances. It's not like I do these things all that often, but they are fun and I like to note them, even if it's only for posterity.
I hosted Smashing Meets Accessibility, a mini online conference that featured three amazing speakers talking about the ins and outs of WCAG conformance, best practices, and incredible personal experiences shaped by disability.

I hosted Smashing Meets CSS, another mini conference from the wonderful Smashing Magazine team. I got to hang out with Adam Argyle, Julia Micene, and Miriam Suzanne, all of whom blew my socks off with their presentations and panel discussion on what's new and possible in modern CSS.

I'm co-hosting a brand-new podcast with Brad Frost called Open Up! We recorded the first episode live in front of an audience that was allowed to speak up and participate in the conversation. The whole idea of the show is that we talk more about the things we tend to talk less about in our work as web designers and developers — the touchy-feely side of what we do. We covered so many heady topics, from desperation during layoffs to rediscovering purpose in your work.

I was a guest on the Mental Health in Tech podcast, joining a panel of other front-enders to discuss angst in the face of recent technological developments. The speed and constant drive to market new technologies is dizzying and, to me at least, off-putting to the extent that I've questioned my entire place in it as a developer. What a blast getting to return to the podcast a second time and talk shop with a bunch of the most thoughtful, insightful people you'll ever hear. I'll share that when it's published.

A new guide on styling counters
    We published it just the other week! I’ll be honest and say that a complete guide about styling counters in CSS was not the first thing that came to my mind when we started planning new ideas, but I’ll be darned if Juan didn’t demonstrate just how big a topic it is. There are so many considerations when it comes to styling counters — design! accessibility! semantics! — and the number of tools we have in CSS to style them is mind-boggling, including two functions that look very similar but have vastly different capabilities for creating custom counters — counter() and counters() (which are also freshly published in the Almanac).
    At the end of last year, I said I hoped to publish 1-2 new guides, and here we are in the first quarter of 2025 with our first one out in the wild! That gives me hope that we’ll be able to get another comprehensive guide out before the end of the year.
    Authors
    I think the most exciting update of all is getting to recognize the folks who have published new articles with us since the last update. Please help me extend a huge round of applause to all the faces who have rolled up their sleeves and shared their knowledge with us.
Lee Meyer
Zell Liew
Andy Clarke (I can't believe it!)
Temani Afif
Andy Bell
Preethi
Daniel Schwarz
Bryan Robinson
Sunkanmi Fafowora

And, of course, nothing on this site would be possible without ongoing help from Juan Diego Rodriguez and Ryan Trimble. Those two not only do a lot of heavy lifting to keep the content machine fed, but they are also just two wonderful people who make my job a lot more fun and enjoyable. Seriously, guys, you mean a lot to this site and me!
    CSS-Tricks Chronicles XLIII originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  3. Blogger
    by: Abhishek Prakash
    Fri, 11 Apr 2025 17:22:49 +0530

    Linux can feel like a big world when you're just getting started — but you don’t have to figure it all out on your own.
    Each edition of LHB Linux Digest brings you clear, helpful articles and quick tips to make everyday tasks a little easier.
    Chances are, a few things here will click with you — and when they do, try working them into your regular routine. Over time, those small changes add up and before you know it, you’ll feel more confident and capable navigating your Linux setup.
    Here are the highlights of this edition:
Running sudo without password
Port mapping in Docker
Docker log viewer tool
And more tools, tips and memes for you

This edition of LHB Linux Digest newsletter is supported by Typesense.

❇️ Typesense, Open Source Algolia Alternative
    Typesense is the free, open-source search engine for forward-looking devs.
    Make it easy on people: Tpyos? Typesense knows we mean typos, and they happen. With ML-powered typo tolerance and semantic search, Typesense helps your customers find what they’re looking for—fast.
    👉 Check them out on GitHub.
GitHub - typesense/typesense: Open Source alternative to Algolia + Pinecone and an easier-to-use alternative to ElasticSearch ⚡ 🔍 ✨ Fast, typo tolerant, in-memory fuzzy search engine for building delightful search experiences.
     
      This post is for subscribers only
  4. Blogger

    Tailwind’s @apply Feature is Better Than it Sounds

    by: Zell Liew
    Thu, 10 Apr 2025 12:39:43 +0000

    By this point, it’s not a secret to most people that I like Tailwind.
    But, unknown to many people (who often jump to conclusions when you mention Tailwind), I don’t like vanilla Tailwind. In fact, I find most of it horrible and I shall refrain from saying further unkind words about it.
    But I recognize and see that Tailwind’s methodology has merits — lots of them, in fact — and they go a long way to making your styles more maintainable and performant.
    Today, I want to explore one of these merit-producing features that has been severely undersold — Tailwind’s @apply feature.
    What @apply does
Tailwind's @apply feature lets you "apply" (or simply put, copy-and-paste) a Tailwind utility into your CSS.
    Most of the time, people showcase Tailwind’s @apply feature with one of Tailwind’s single-property utilities (which changes a single CSS declaration). When showcased this way, @apply doesn’t sound promising at all. It sounds downright stupid. So obviously, nobody wants to use it.
```css
/* Input */
.selector {
  @apply p-4;
}

/* Output */
.selector {
  padding: 1rem;
}
```

To make it worse, Adam Wathan recommends against using @apply, so the uptake couldn't be worse.
    Personally, I think Tailwind’s @apply feature is better than described.
Tailwind's @apply is like Sass's @include
If you were around during the time when Sass was the dominant CSS processing tool, you've probably heard of Sass mixins. They are blocks of code that you can make — in advance — to copy-paste into the rest of your code.
To create a mixin, you use @mixin. To use a mixin, you use @include.

```scss
// Defining the mixin
@mixin some-mixin() {
  color: red;
  background: blue;
}

// Using the mixin
.selector {
  @include some-mixin();
}

/* Output */
.selector {
  color: red;
  background: blue;
}
```

Tailwind's @apply feature works the same way. You can define Tailwind utilities in advance and use them later in your code.
```css
/* Defining the utility */
@utility some-utility {
  color: red;
  background: blue;
}

/* Applying the utility */
.selector {
  @apply some-utility;
}

/* Output */
.selector {
  color: red;
  background: blue;
}
```

Tailwind utilities are much better than Sass mixins
    Tailwind’s utilities can be used directly in the HTML, so you don’t have to write a CSS rule for it to work.
```css
@utility some-utility {
  color: red;
  background: blue;
}
```

```html
<div class="some-utility">...</div>
```

On the contrary, for Sass mixins, you need to create an extra selector to house your @include before using that selector in the HTML. That's one extra step. Many of these extra steps add up to a lot.
```scss
@mixin some-mixin() {
  color: red;
  background: blue;
}

.selector {
  @include some-mixin();
}

/* Output */
.selector {
  color: red;
  background: blue;
}
```

```html
<div class="selector">...</div>
```

Tailwind's utilities can also be used with their responsive variants. This unlocks media queries straight in the HTML and can be a superpower for creating responsive layouts.
    <div class="utility1 md:utility2">…</div> A simple and practical example
    One of my favorite — and most easily understood — examples of all time is a combination of two utilities that I’ve built for Splendid Layouts (a part of Splendid Labz):
vertical: makes a vertical layout
horizontal: makes a horizontal layout

Defining these two utilities is easy. For vertical, we can use flexbox with flex-direction set to column. For horizontal, we use flexbox with flex-direction set to row.

```css
@utility horizontal {
  display: flex;
  flex-direction: row;
  gap: 1rem;
}

@utility vertical {
  display: flex;
  flex-direction: column;
  gap: 1rem;
}
```

After defining these utilities, we can use them directly inside the HTML. So, if we want to create a vertical layout on mobile and a horizontal one on tablet or desktop, we can use the following classes:
    <div class="vertical sm:horizontal">...</div> For those who are new to Tailwind, sm: here is a breakpoint variant that tells Tailwind to activate a class when it goes beyond a certain breakpoint. By default, sm is set to 640px, so the above HTML produces a vertical layout on mobile, then switches to a horizontal layout at 640px.
Open Live Demo

If you prefer traditional CSS over composing classes like the example above, you can treat @apply like Sass's @include and use it directly in your CSS.
    <div class="your-layout">...</div> .your-layout { @apply vertical; @media (width >= 640px) { @apply horizontal; } } The beautiful part about both of these approaches is you can immediately see what’s happening with your layout — in plain English — without parsing code through a CSS lens. This means faster recognition and more maintainable code in the long run.
    Tailwind’s utilities are a little less powerful compared to Sass mixins
Sass mixins are more powerful than Tailwind utilities because:

They let you use multiple variables.
They let you use other Sass features like @if and @for loops.

```scss
@mixin avatar($size, $circle: false) {
  width: $size;
  height: $size;

  @if $circle {
    border-radius: math.div($size, 2);
  }
}
```

On the other hand, Tailwind utilities don't have these powers. At the very maximum, Tailwind can let you take in one variable through their functional utilities.
```css
/* Tailwind Functional Utility */
@utility tab-* {
  tab-size: --value(--tab-size-*);
}
```

Fortunately, we're not affected by this "lack of power" much because we can take advantage of all modern CSS improvements — including CSS variables. This gives you a ton of room to create very useful utilities.
    Let’s go through another example
    A second example I often like to showcase is the grid-simple utility that lets you create grids with CSS Grid easily.
    We can declare a simple example here:
```css
@utility grid-simple {
  display: grid;
  grid-template-columns: repeat(var(--cols), minmax(0, 1fr));
  gap: var(--gap, 1rem);
}
```

By doing this, we have effectively created a reusable CSS grid (and we no longer have to manually declare minmax everywhere).
    After we have defined this utility, we can use Tailwind’s arbitrary properties to adjust the number of columns on the fly.
    <div class="grid-simple [--cols:3]"> <div class="item">...</div> <div class="item">...</div> <div class="item">...</div> </div> To make the grid responsive, we can add Tailwind’s responsive variants with arbitrary properties so we only set --cols:3 on a larger breakpoint.
    <div class="grid-simple sm:[--cols:3]"> <div class="item">...</div> <div class="item">...</div> <div class="item">...</div> </div> Open Live Demo This makes your layouts very declarative. You can immediately tell what’s going on when you read the HTML.
    Now, on the other hand, if you’re uncomfortable with too much Tailwind magic, you can always use @apply to copy-paste the utility into your CSS. This way, you don’t have to bother writing repeat and minmax declarations every time you need a grid that grid-simple can create.
```css
.your-layout {
  @apply grid-simple;

  @media (width >= 640px) {
    --cols: 3;
  }
}
```

```html
<div class="your-layout"> ... </div>
```

By the way, using @apply this way is surprisingly useful for creating complex layouts! But that seems out of scope for this article, so I'll be happy to show you an example another day.
    Wrapping up
    Tailwind’s utilities are very powerful by themselves, but they’re even more powerful if you allow yourself to use @apply (and allow yourself to detach from traditional Tailwind advice). By doing this, you gain access to Tailwind as a tool instead of it being a dogmatic approach.
    To make Tailwind’s utilities even more powerful, you might want to consider building utilities that can help you create layouts and nice visual effects quickly and easily.
    I’ve built a handful of these utilities for Splendid Labz and I’m happy to share them with you if you’re interested! Just check out Splendid Layouts to see a subset of the utilities I’ve prepared.
    By the way, the utilities I showed you above are watered-down versions of the actual ones I’m using in Splendid Labz.
    One more note: When writing this, Splendid Layouts work with Tailwind 3, not Tailwind 4. I’m working on a release soon, so sign up for updates if you’re interested!
    Tailwind’s @apply Feature is Better Than it Sounds originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  5. Blogger
    by: Geoff Graham
    Thu, 10 Apr 2025 11:26:00 +0000

    If I were starting with CSS today for the very first time, I would first want to spend time understanding writing modes because that’s a great place to wrap your head around direction and document flow. But right after that, and even more excitedly so, I would jump right into display and get a firm grasp on layout strategies.
    And where would I learn that? There are lots of great resources out there. I mean, I have a full course called The Basics that gets into all that. I’d say you’d do yourself justice getting that from Andy Bell’s Complete CSS course as well.
But, hey, here's a brand new way to bone up on layout: Miriam Suzanne is running a workshop later this month. Cascading Layouts is all about building more resilient and maintainable web layouts using modern CSS, without relying on third-party tools. Remember, Miriam works on CSS specifications, is a core contributor to Sass, and is just plain an all-around great educator. There are few, if any, who are more qualified to cover the ins and outs of CSS layout, and I can tell you that her work really helped inspire and inform the content in my course. The workshop is online, runs April 28-30, and is a whopping $100 off if you register by April 12.
    Just a taste of what’s included:

    Cascading Layouts: A Workshop on Resilient CSS Layouts originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  6. Blogger
    by: Abhishek Prakash
    Thu, 10 Apr 2025 05:17:14 GMT

    Linux YouTuber Brodie Robertson liked It's FOSS' April Fool joke so much that he made a detailed video on it. It's quite fun to watch, actually 😄
    💬 Let's see what else you get in this edition
A new APT release.
Photo management software.
Steam Client offering many refinements for Linux.
And other Linux news, tips, and, of course, memes!

This edition of FOSS Weekly is supported by Internxt.

SPONSORED

❇️ Future-Proof Your Cloud Storage With Post-Quantum Encryption

    Get 82% off any Internxt lifetime plan—a one-time payment for private, post-quantum encrypted cloud storage.

    No subscriptions, no recurring fees, 30-day money back policy.
Get this deal

📰 Linux and Open Source News
The Proton VPN app has received many refinements.
Steam Client's April 2025 update has a lot to offer for Linux gamers.
Pinta has launched a redesigned website in collaboration with RolandiXor.
The APT 3.0 release has finally arrived with a better user experience.

A Colorful APT 3.0 Release Impresses with its New Features
The latest APT release features a new solver, alongside several user experience enhancements.
It's FOSS News | Sourav Rudra

🧠 What We're Thinking About
    Mozilla has begun the initial implementation of AI features into Firefox.
I Tried This Upcoming AI Feature in Firefox
Firefox will be bringing experimental AI-generated link previews, offering quick on-device summaries. Here's my quick experience with it.
It's FOSS News | Sourav Rudra

🧮 Linux Tips, Tutorials and More
Get started with Pamac GUI package manager in Arch Linux.
A list of photo management software on Linux.
Learn how to install Logseq on your Linux system.
7 code editors you can use for Vibe Coding.

7 Code Editors You Can Use for Vibe Coding on Linux
Want to try vibe coding? Here are the best editors I recommend using on Linux.
It's FOSS | Abhishek Kumar

Desktop Linux is mostly neglected by the industry but loved by the community. For the past 12 years, It's FOSS has been helping people use Linux on their personal computers. And we are now facing the existential threat from AI models stealing our content.
    If you like what we do and would love to support our work, please become It's FOSS Plus member. It costs $24 a year (less than the cost of a burger meal each month) and you get an ad-free reading experience with the satisfaction of helping the desktop Linux community.
    Join It's FOSS Plus 👷 Homelab and Maker's Corner
    This time, we have a DIY biosignal tool that can be used for neuroscience research and education purposes.
DIY Neuroscience: Meet the Open Source PiEEG Kit for Brain and Body Signals
The PiEEG kit is an open source, portable biosignal tool designed for research, measuring EEG, EMG, EKG, and EOG signals. Want to crowdfund the project?
It's FOSS News | Sourav Rudra

✨ Apps Highlight
    Clapgrep is a powerful open source search tool for Linux.
Clapgrep: An Easy-to-Use Open Source Linux App To Search Through Your PDFs and Text Documents
Want to look for something in your text documents? Use Clapgrep to quickly search for it!
It's FOSS News | Sourav Rudra

📽️ Videos I am Creating for You
    See the new features in APT 3.0 in action in our latest video.
Subscribe to It's FOSS YouTube Channel

🧩 Quiz Time
    Take a trip down memory lane with our 80s Nostalgic Gadgets puzzle.
80s Nostalgic Gadgets
Remember the 80s? This quiz is for you :)
It's FOSS | Abhishek Prakash

How sharp is your Git knowledge? Our latest crossword will test your knowledge.
    💡 Quick Handy Tip
    In Firefox, you can delete temporary browsing data using the "Forget" button. First, right-click on the toolbar and select "Customize Toolbar".
Now, from the list, drag and drop the "Forget" button to the toolbar. If you click on it, you will be asked to clear 5 minutes, 2 hours, or 24 hours of browsing data; pick one of them and click on "Forget!".
    🤣 Meme of the Week
    The glow up is real with this one. 🤭
    🗓️ Tech Trivia
On April 7, 1964, IBM introduced the System/360, the first family of computers designed to be fully compatible with each other, unlike earlier systems where each model had its own unique software and hardware.
    🧑‍🤝‍🧑 FOSSverse Corner
    One of our regular FOSSers played around with ARM64 on Linux and liked it.
ARM64 on Linux is Fun!
Hi, I've been playing with my Pinebook Pro lately and tried Armbian, Manjaro, Void and Gentoo on it. It's been fun! New things learned like boot from u-boot, then moving to tow-boot as "first boot loader" which starts grub. I tried four distroes on a SD, Manjaro was the official and Armbian also was an .iso. Void and Gentoo I installed thrue chroot manually. I'm biased but it says something (at least I think so) that I did a Gentoo install twice to this small laptop. First one was just to try it…
It's FOSS Community | ihasama

❤️ With love
    Share it with your Linux-using friends and encourage them to subscribe (hint: it's here).
    Share the articles in Linux Subreddits and community forums.
    Follow us on Google News and stay updated in your News feed.
    Opt for It's FOSS Plus membership and support us 🙏
    Enjoy FOSS 😄
  7. Blogger

    CSS Carousels

    by: Geoff Graham
    Wed, 09 Apr 2025 13:00:24 +0000

    The CSS Overflow Module Level 5 specification defines a couple of new features that are designed for creating carousel UI patterns:
Scroll Buttons: Buttons that the browser provides, as in literal <button> elements, that scroll the carousel content 85% of the area when clicked.

Scroll Markers: The little dots that act as anchored links, as in literal <a> elements, that scroll to a specific carousel item when clicked.

Chrome has prototyped these features and released them in Chrome 135. Adam Argyle has a wonderful explainer over at the Chrome Developer blog. Kevin Powell has an equally wonderful video where he follows the explainer. This post is me taking notes from them.
    First, some markup:
```html
<ul class="carousel">
  <li>...</li>
  <li>...</li>
  <li>...</li>
  <li>...</li>
  <li>...</li>
</ul>
```

First, let's set these up in a CSS auto grid that displays the list items in a single line:

```css
.carousel {
  display: grid;
  grid-auto-flow: column;
}
```

We can tailor this so that each list item takes up a specific amount of space, say 40%, and insert a gap between them:

```css
.carousel {
  display: grid;
  grid-auto-flow: column;
  grid-auto-columns: 40%;
  gap: 2rem;
}
```

This gives us a nice scrolling area to advance through the list items by moving left and right. We can use CSS Scroll Snapping to ensure that scrolling stops on each item in the center rather than scrolling right past them.

```css
.carousel {
  display: grid;
  grid-auto-flow: column;
  grid-auto-columns: 40%;
  gap: 2rem;
  scroll-snap-type: x mandatory;

  > li {
    scroll-snap-align: center;
  }
}
```

Kevin adds a little more flourish to the .carousel so that it is easier to see what's going on. Specifically, he adds a border to the entire thing as well as padding for internal spacing.
    So far, what we have is a super simple slider of sorts where we can either scroll through items horizontally or click the left and right arrows in the scroller.
    We can add scroll buttons to the mix. We get two buttons, one to navigate one direction and one to navigate the other direction, which in this case is left and right, respectively. As you might expect, we get two new pseudo-elements for enabling and styling those buttons:
::scroll-button(left)
::scroll-button(right)

Interestingly enough, if you crack open DevTools and inspect the scroll buttons, they are actually exposed with logical terms instead, ::scroll-button(inline-start) and ::scroll-button(inline-end).
    And both of those support the CSS content property, which we use to insert a label into the buttons. Let’s keep things simple and stick with “Left” and “Right” as our labels for now:
```css
.carousel::scroll-button(left) {
  content: "Left";
}

.carousel::scroll-button(right) {
  content: "Right";
}
```

Now we have two buttons above the carousel. Clicking them either advances the carousel left or right by 85%. Why 85%? I don't know. And neither does Kevin. That's just what it says in the specification. I'm sure there's a good reason for it and we'll get more light shed on it at some point.
    But clicking the buttons in this specific example will advance the scroll only one list item at a time because we’ve set scroll snapping on it to stop at each item. So, even though the buttons want to advance by 85% of the scrolling area, we’re telling it to stop at each item.
    Remember, this is only supported in Chrome at the time of writing:
CodePen Embed Fallback

We can select both buttons together in CSS, like this:

```css
.carousel::scroll-button(left),
.carousel::scroll-button(right) {
  /* Styles */
}
```

Or we can use the Universal Selector:

```css
.carousel::scroll-button(*) {
  /* Styles */
}
```

And we can even use newer CSS Anchor Positioning to set the left button on the carousel's left side and the right button on the carousel's right side:

```css
.carousel {
  /* ... */
  anchor-name: --carousel; /* define the anchor */
}

.carousel::scroll-button(*) {
  position: fixed; /* set containment on the target */
  position-anchor: --carousel; /* set the anchor */
}

.carousel::scroll-button(left) {
  content: "Left";
  position-area: center left;
}

.carousel::scroll-button(right) {
  content: "Right";
  position-area: center right;
}
```

Notice what happens when navigating all the way to the left or right of the carousel. The buttons are disabled, indicating that you have reached the end of the scrolling area. Super neat! That's something that is normally in JavaScript territory, but we're getting it for free.
CodePen Embed Fallback

Let's work on the scroll markers, or those little dots that sit below the carousel's content. Each one is an <a> element anchored to a specific list item in the carousel so that, when clicked, you get scrolled directly to that item.
    We get a new pseudo-element for the entire group of markers called ::scroll-marker-group that we can use to style and position the container. In this case, let’s set Flexbox on the group so that we can display them on a single line and place gaps between them in the center of the carousel’s inline size:
```css
.carousel::scroll-marker-group {
  display: flex;
  justify-content: center;
  gap: 1rem;
}
```

We also get a new scroll-marker-group property that lets us position the group either above (before) the carousel or below (after) it:

```css
.carousel {
  /* ... */
  scroll-marker-group: after; /* displayed below the content */
}
```

We can style the markers themselves with the new ::scroll-marker pseudo-element:

```css
.carousel {
  /* ... */

  > li::scroll-marker {
    content: "";
    aspect-ratio: 1;
    border: 2px solid CanvasText;
    border-radius: 100%;
    width: 20px;
  }
}
```

When clicking on a marker, it becomes the "active" item of the bunch, and we get to select and style it with the :target-current pseudo-class:
```css
li::scroll-marker:target-current {
  background: CanvasText;
}
```

Take a moment to click around the markers. Then take a moment using your keyboard and appreciate that we get all of the benefits of focus states as well as the ability to cycle through the carousel items when reaching the end of the markers. It's amazing what we're getting for free in terms of user experience and accessibility.

CodePen Embed Fallback

We can further style the markers when they are hovered or in focus:

```css
li::scroll-marker:hover,
li::scroll-marker:focus-visible {
  background: LinkText;
}
```

And we can "animate" the scrolling effect by setting scroll-behavior: smooth on the scroll snapping. Adam smartly applies it when the user's motion preferences allow it:

```css
.carousel {
  /* ... */

  @media (prefers-reduced-motion: no-preference) {
    scroll-behavior: smooth;
  }
}
```

Buuuuut that seems to break scroll snapping a bit because the scroll buttons are attempting to slide things over by 85% of the scrolling space. Kevin had to fiddle with his grid-auto-columns sizing to get things just right, but showed how Adam's example took a different sizing approach. It's a matter of fussing with things to get them just right.

CodePen Embed Fallback

This is just a super early look at CSS Carousels. Remember that this is only supported in Chrome 135+ at the time I'm writing this, and it's purely experimental. So, play around with it, get familiar with the concepts, and then be open-minded to changes in the future as the CSS Overflow Level 5 specification is updated and other browsers begin building support.
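For quick reference, here is a rough sketch that pulls the pieces from this post together in one place. It assumes Chrome 135+, and the overflow declaration is my own addition, since a scrollable overflow area is needed for the snapping and buttons to do anything:

```css
.carousel {
  display: grid;
  grid-auto-flow: column;
  grid-auto-columns: 40%;
  gap: 2rem;
  overflow-x: auto;              /* make it a scroll container (assumption, not shown above) */
  scroll-snap-type: x mandatory;
  anchor-name: --carousel;       /* anchor for the scroll buttons */
  scroll-marker-group: after;    /* markers displayed below the content */

  @media (prefers-reduced-motion: no-preference) {
    scroll-behavior: smooth;
  }

  > li {
    scroll-snap-align: center;
  }

  > li::scroll-marker {
    content: "";
    aspect-ratio: 1;
    border: 2px solid CanvasText;
    border-radius: 100%;
    width: 20px;
  }

  > li::scroll-marker:target-current {
    background: CanvasText;
  }

  > li::scroll-marker:hover,
  > li::scroll-marker:focus-visible {
    background: LinkText;
  }
}

/* browser-provided previous/next buttons, anchored to either side */
.carousel::scroll-button(*) {
  position: fixed;
  position-anchor: --carousel;
}

.carousel::scroll-button(left) {
  content: "Left";
  position-area: center left;
}

.carousel::scroll-button(right) {
  content: "Right";
  position-area: center right;
}

/* the group of little dot markers */
.carousel::scroll-marker-group {
  display: flex;
  justify-content: center;
  gap: 1rem;
}
```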
    CSS Carousels originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  8. Blogger

    Port Mapping in Docker: When and How to Use it?

    by: Umair Khurshid
    Tue, 08 Apr 2025 12:11:49 +0530

    Port management in Docker and Docker Compose is essential to properly expose containerized services to the outside world, both in development and production environments.
    Understanding how port mapping works helps avoid conflicts, ensures security, and improves network configuration.
This tutorial will help you understand how to configure and map ports effectively in Docker and Docker Compose.
    What is port mapping in Docker?
    Port mapping exposes network services running inside a container to the host, to other containers on the same host, or to other hosts and network devices. It allows you to map a specific port from the host system to a port on the container, making the service accessible from outside the container.
In the schematic below, there are two separate services running in two containers, and both use port 80. Their ports are mapped to host ports 8080 and 8090, so they are accessible from outside the containers using these two ports.
    How to map ports in Docker
    Typically, a running container has its own isolated network namespace with its own IP address. By default, containers can communicate with each other and with the host system, but external network access is not automatically enabled.
    Port mapping is used to create communication between the container's isolated network and the host system's network.
    For example, let's map Nginx to port 80:
docker run -d --publish 8080:80 nginx

The --publish flag (usually shortened to -p) is what allows us to create that association between the local port (8080) and the port of interest to us in the container (80).
    In this case, to access it, you simply use a web browser and access http://localhost:8080
On the other hand, if the image you are using to create the container has made good use of the EXPOSE instruction, you can use the command in this other way:

docker run -d --publish-all hello-world

Docker takes care of choosing a random port on your machine (instead of port 80 or other specified ports) to map with those specified in the Dockerfile:
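To find out which host port Docker picked, you can ask with docker port or check the PORTS column of docker ps. A quick sketch using the nginx image as a stand-in (the container name is just an example):

```
# publish every EXPOSEd port to a random host port
docker run -d --publish-all --name web nginx

# ask Docker which host port was assigned
docker port web
# prints something like: 80/tcp -> 0.0.0.0:32768
```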
    Mapping ports with Docker Compose
    Docker Compose allows you to define container configurations in a docker-compose.yml. To map ports, you use the ports YAML directive.
```yaml
version: '3.8'
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
```

In this example, as in the previous case, the Nginx container will expose port 80 on the host's port 8080.
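To bring the service up with that mapping in place, you run Compose as usual and then hit the mapped port (a minimal sketch; older installs use the docker-compose binary instead of docker compose):

```
docker compose up -d          # start the service in the background
curl http://localhost:8080    # should reach Nginx listening on port 80 inside the container
```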
    Port mapping vs. exposing
It is important not to confuse the ports directive with the expose directive. The former creates true port forwarding to the outside. The latter only serves to document that an internal port is being used by the container, but does not create any exposure to the host.
```yaml
services:
  app:
    image: myapp
    expose:
      - "3000"
```

In this example, port 3000 will only be accessible from other containers in the same Docker network, but not from outside.
    Mapping Multiple Ports
    You just saw how to map a single port, but Docker also allows you to map more than one port at a time. This is useful when your container needs to expose multiple services on different ports.
Let's configure an Nginx server to handle both HTTP and HTTPS traffic:

docker run -p 8080:80 -p 443:443 nginx

Now the server listens for both HTTP traffic on host port 8080, mapped to port 80 inside the container, and HTTPS traffic on port 443, mapped to port 443 inside the container.
    Specifying host IP address for port binding
    By default, Docker binds container ports to all available IP addresses on the host machine. If you need to bind a port to a specific IP address on the host, you can specify that IP in the command. This is useful when you have multiple network interfaces or want to restrict access to specific IPs.
docker run -p 192.168.1.100:8080:80 nginx

This command binds port 8080 on the specific IP address 192.168.1.100 to port 80 inside the container.
    Port range mapping
    Sometimes, you may need to map a range of ports instead of a single port. Docker allows this by specifying a range for both the host and container ports. For example,
docker run -p 5000-5100:5000-5100 nginx

This command maps the range of ports from 5000 to 5100 on the host to the same range inside the container. This is particularly useful when running services that need multiple ports, like a cluster of servers or applications with several endpoints.
    Using different ports for host and container
    In situations where you need to avoid conflicts, security concerns, or manage different environments, you may want to map different port numbers between the host machine and the container. This can be useful if the container uses a default port, but you want to expose it on a different port on the host to avoid conflicts.
docker run -p 8081:80 nginx

This command maps port 8081 on the host to port 80 inside the container. Here, the container is still running its web server on port 80, but it is exposed on port 8081 on the host machine.
    Binding to UDP ports (if you need that)
    By default, Docker maps TCP ports. However, you can also map UDP ports if your application uses UDP. This is common for protocols and applications that require low latency, real-time communication, or broadcast-based communication.
    For example, DNS uses UDP for query and response communication due to its speed and low overhead. If you are running a DNS server inside a Docker container, you would need to map UDP ports.
docker run -p 53:53/udp ubuntu/bind9

This command maps UDP port 53 on the host to UDP port 53 inside the container.
    Inspecting and verifying port mapping
    Once you have set up port mapping, you may want to verify that it’s working as expected. Docker provides several tools for inspecting and troubleshooting port mappings.
    To list all active containers and see their port mappings, use the docker ps command. The output includes a PORTS column that shows the mapping between the host and container ports.
    docker ps This might output something like:
If you need more detailed information about a container's port mappings, you can use docker inspect. This command gives you a JSON output with detailed information about the container's configuration.

docker inspect <container_id> | grep "Host"

This command will display the port mappings, such as:
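If you only want the mappings without digging through the full JSON, docker port prints them directly (the container ID below is a placeholder):

```
docker port <container_id>
# prints entries like: 80/tcp -> 0.0.0.0:8080
```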
    Wrapping Up
Learn Docker: Complete Beginner's Course
Learn Docker, an important skill to have for any DevOps engineer and modern sysadmin. Learn all the essentials of Docker in this series.
Linux Handbook | Abdullah Tarek

When you are first learning Docker, one of the trickier topics is often networking and port mapping. I hope this rundown has helped clarify how port mapping works and how you can effectively use it to connect your containers to the outside world and orchestrate services across different environments.
  9. Blogger

    Chris’ Corner: 10 HTML Hits

    by: Chris Coyier
    Mon, 07 Apr 2025 17:01:01 +0000

Love HTML? Good. It's very lovable. One of my favorite parts is how you can screw it all up and it still does its absolute best to render what it thinks you meant. Not a lot of other languages are like that. Are there any? English, I suppose lolz. Anyway — I figured I'd just share 10 links about HTML that I've saved because, well, I personally thought they were interesting and enjoyed learning what they had to teach.
The selected date must be within the last 10 years by Gerardo Rodriguez — That's the requirement. How do you do it? Definitely use the HTML validation features for the date input. But unfortunately, JavaScript will need to be involved to set them, which means timezones and all that.

Just because you can doesn't mean you should: the <meter> element by Sophie Koonin — I think the HTML Heat the oven to <meter min="200" max="500" value="350">350 degrees</meter> is hilarious(ly silly). As Sophie puts it, this is the "letter, not the spirit, of semantic HTML."

Fine-tuning Text Inputs by Garrett Dimon — One of those articles that reminds you that there are actually quite a few bonus attributes for HTML inputs, and they are all actually pretty useful and you should consider using them.

How to think about HTML responsive images by Dan Cătălin Burzo — Speaking of loads of fairly complex attributes, it's hard to beat responsive images in that regard. Always interesting to read something thinking through it all. This always leads me to the conclusion that it absolutely needs to be automated.

Styling an HTML dialog modal to take the full height of the viewport by Simon Willison — Sometimes unexpected browser styles can bite you. DevTools help uncover that of course… if you can see them. I was once very confused about weird behaving dialogs when I set one to be position: relative; (they are fixed by browser styles), so watch for that too.

Foundations: grouping forms with <fieldset> and <legend> by Demelza Feltham — "Accessible grouping benefits everyone". Let's clean up the weird styling of fieldset while we're at it.

HTML Whitespace is Broken by Douglas Parker — I think a lot of us have a developed intuition of how HTML uses or ignores whitespace, but Douglas manages to point out some real oddities I hadn't thought of clearly defining before. Like if there is a space both before and after a </a>, the space will collapse, but will the single space be part of the link or not? Formatters struggle with this as their formatting can introduce output changes. It's one reason I like JSX because it's ultra opinionated on formatting and how spacing is handled.

A highly configurable switch component using modern CSS techniques by Andy Bell — We've got <input type="checkbox" switch> coming (it's in Safari now), but if you can't wait you can build your own, as long as you are careful.

Building a progress-indicating scroll-to-top button in HTML and CSS by Manuel Matuzović — I like that trick where you can use scroll-driven animations to only reveal a "scroll back to the top" button after you've scrolled down a bit. I also like Manuel's idea here where the button itself fills up as you scroll down. I generally don't care for scroll progress indicators, but this is so subtle it's cool.

Test-driven HTML and accessibility by David Luhr — I've written Cypress tests before and this echoes that, but feels kinda lower level. It's interesting looking at server JavaScript executing DOM JavaScript with expect tests. I suppose it's a bit like building your own aXe.
  10. Blogger

    Installing Logseq Knowledge Management Tool on Linux

    by: Sreenath
    Mon, 07 Apr 2025 16:18:54 GMT

    Logseq is a versatile open source tool for knowledge management. It is regarded as one of the best open source alternatives to the popular proprietary tool Obsidian.
    While it covers the basics of note-taking, it also doubles down as a powerful task manager and journaling tool.
Logseq Desktop

What sets Logseq apart from traditional note-taking apps is its unique organization system, which forgoes hierarchical folder structures in favor of interconnected, block-based notes. This makes it an excellent choice for users seeking granular control and flexibility over their information.
    In this article, we’ll explore how to install Logseq on Linux distributions.
    Use the official AppImage
    For Linux systems, Logseq officially provides an AppImage. You can head over to the downloads page and grab the AppImage file.
Download Logseq

It is advised to use tools like AppImageLauncher (it hasn't seen a new release for a while, but it is still active) or GearLever to create a desktop integration for Logseq.
Fret not; if you would rather not use a third-party tool, you can do it yourself as well.
    First, create a folder in your home directory to store all the AppImages. Next, move the Logseq AppImage to this location and give the file execution permission.
Go to AppImage properties

Right-click on the AppImage file and go to the file properties. Here, in the Permissions tab, select "Allow Executing as a Program" or "Executable as Program"; the wording depends on the distro, but it means the same thing.
Here's how it looks on a distribution with the GNOME desktop:

Toggle Execution permission

Once done, you can double-click to open the Logseq app.
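If you prefer the terminal, the same steps look roughly like this (the folder and file name are only examples; adjust them to match your download):

```
mkdir -p ~/AppImages
mv ~/Downloads/Logseq-linux-x64-*.AppImage ~/AppImages/
chmod +x ~/AppImages/Logseq-linux-x64-*.AppImage

# launch Logseq
~/AppImages/Logseq-linux-x64-*.AppImage
```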
🚧 If you are using Ubuntu 24.04 and above, you won't be able to open the Logseq AppImage due to a change in the AppArmor policy. You can either use other sources like Flatpak or take a look at a less secure alternative.

Alternatively, use the 'semi-official' Flatpak
    Logseq has a Flatpak version available. This is not an official offering from the Logseq team, but is provided by a developer who also contributes to Logseq.
    First, make sure your system has Flatpak support. If not, enable Flatpak support and add Flathub repository by following our guide:
Using Flatpak on Linux [Complete Guide]
Learn all the essentials for managing Flatpak packages in this beginner's guide.
It's FOSS | Abhishek Prakash

Now, install Logseq either from a Flatpak-supported software center like GNOME Software:

Install Logseq from GNOME Software

Or install it using the terminal with the following command:

flatpak install flathub com.logseq.Logseq

Other methods
    For Ubuntu users and those who have Snap setup, there is an unofficial Logseq client in the Snap store. You can go with that if you prefer.
    There are also packages available in the AUR for Logseq desktop clients. Arch Linux users can take a look at these packages and get it installed via the terminal using Pamac package manager.
    Post Installation
    Once you have installed Logseq, open it. This will bring you to the temporary journal page.
You need to open a local folder in Logseq before you start your work, to avoid potential data loss. For this, click on the "Add a graph" button on the top-right, as shown in the screenshot below.

Click on "Add a graph"

On the resulting page, click on the "Choose a folder" button.

Click "Choose a folder"

From the file chooser, either create a new directory or select an existing directory and click "Open".

Select a location

That's it. You can start using Logseq now. And I'll help you with that. I'll be sharing regular tutorials on using Logseq for the next few days/weeks here. Stay tuned.
  11. Blogger
    by: Lee Meyer
    Mon, 07 Apr 2025 14:41:53 +0000

    When I was young and dinosaurs walked the earth, I worked on a software team that developed a web-based product for two years before ever releasing it. I don’t just mean we didn’t make it publicly available; we didn’t deploy it anywhere except for a test machine in the office, accessed by two internal testers, and this required a change to each tester’s hosts file. You don’t have to be an agile evangelist to spot the red flag. There’s “release early, release often,” which seemed revolutionary the first time I heard it after living under a waterfall for years, or there’s building so much while waiting so long to deploy that you guarantee weird surprises in a realistic deployment, let alone when you get feedback from real users. I’m told the first deployment experience to a web farm was very special.
    A tale of a dodgy deployment
    Being a junior, I was spared being involved in the first deployment. But towards the end of the first three-month cycle of fixes, the team leader asked me, “Would you be available on Tuesday at 2 a.m. to come to the office and do a deployment?”
    “Yep, sure, no worries.” I went home thinking what a funny dude my team leader was.
    So on Tuesday at 9 a.m., I show up and say good morning to the team leader and the architect, who sit together staring at one computer. I sit down at my dev machine and start typing.
    “Man, what happened?” the team leader says over the partition. “You said you’d be here at 2 a.m.”
    I look at him and see he is not smiling. I say, ”Oh. I thought you were joking.”
    “I was not joking, and we have a massive problem with the deployment.”
    Uh-oh.
    I was junior and did not have the combined forty years of engineering experience of the team leader and architect, but what I had that they lacked was a well-rested brain, so I found the problem rather quickly: It was a code change the dev manager had made to the way we handled cookies, which didn’t show a problem on the internal test server but broke the world on the real web servers. Perhaps my finding the issue was the only thing that saved me from getting a stern lecture. By the time I left years later, it was just a funny story the dev manager shared in my farewell speech, along with nice compliments about what I had achieved for the company — I also accepted an offer to work for the company again later.
    Breaking news: Human beings need sleep
    I am sure the two seniors would have been capable of spotting the problem under different circumstances. They had a lot working against them: Sleep deprivation, together with the miscommunication about who would be present, would’ve contributed to feelings of panic, which the outage would’ve exacerbated after they powered through and deployed without me. More importantly, they didn’t know whether the problem was in the new code or human error in their manual deployment process of copying zipped binaries and website files to multiple servers, manually updating config files, comparing and updating database schemas — all in the wee hours of the morning.
    They were sleepily searching for a needle in a haystack of their own making. The haystack wouldn’t have existed if they had a proven automated deployment process, and if they could be sure the problem could only reside in the code they deployed. There was no reason everything they were doing couldn’t be scripted. They could’ve woken up at 6 a.m. instead of 2 a.m. to verify the automated release of the website before shifting traffic to it and fix any problems that became evident in their release without disrupting real users. The company would get a more stable website and the expensive developers would have more time to focus on developing.
    If you manually deploy overnight, and then drive, you’re a bloody idiot
    The 2 a.m. deployments might seem funny if it wasn’t your night to attend and if you have a dark sense of humor. In the subsequent years, I attended many 2 a.m. deployments to atone for the one I slept through. The company paid for breakfast on those days, and if we proved the deployment was working, we could leave for the day and catch up on sleep, assuming we survived the drive home and didn’t end up sleeping forever.
    The manual deployment checklist was perpetually incomplete and out-of-date, yet the process was never blamed for snafus on deployment days. In reality, sometimes it was a direct consequence of the fallibility of manually working from an inaccurate checklist. Sometimes manual deployment wasn’t directly the culprit, but it made pinpointing the problem or deciding whether to roll back unnecessarily challenging. And you knew rolling back would mean forgoing your sleep again the next day so you’d have a mounting sleep debt working against you.
    I learned a lot from that team and the complex features I had the opportunity to build. But the deployment process was a step backward from my internship doing Windows programming because in that job I had to write installers so my code would work on user machines, which by nature of the task, I didn’t have access to. When web development removes that inherent limitation, it’s like a devil on your shoulder tempting you to do what seems easy in the moment and update production from your dev machine. You know you want to, especially when the full deployment process is hard and people want a fix straightaway. This is why if you automate deployments, you want to lock things down so that the automated process is the only way to deploy changes.
    As I became more senior and had more say in how these processes happened at my workplace, I started researching — and I found it easy to relate to the shots taken at manual deployers, such as this presentation titled “Octopus Deploy and how to stop deploying like an idiot” and Octopus Deploy founder Paul Stovell’s sentiments on how to deploy database updates: “Your database isn’t in source control? You don’t deserve one. Go use Excel.” This approach to giving developers a kick in their complacency reminds me of the long-running anti-drunk driving ads here in Australia with the slogan “If you drink then drive, you’re a bloody idiot,” which scared people straight by insulting them for destructive life choices.
    In the “Stop deploying like an idiot” talk, Damian Brady insults a hypothetical deployment manager at Acme Corp named Frank, who keeps being a hero by introducing risk and wasted time to a process that could be automated using Octopus, which would never make stupid mistakes like overwriting the config file.
    “Frank’s pretty proud of his process in general,” says Damian. “Frank’s an idiot.”
    Why are people like this?
    Frankly, some of the Franks I have worked with were something worse than idiots. Comedian Jim Jeffries has a bit in which he says he’d take a nice idiot over a clever bastard. Frank’s a cunning bastard wolf in idiotic sheep’s clothing — the demographic of people who work in software shows above average IQ, and a person appointed “deployment manager” will have googled the options to make this task easier, but he chose not to use them. The thing is, Frank gets to seem important, make other devs look and feel stupid when they try to follow his process while he’s on leave, and even when he is working he gets to take overseas trips to hang with clients because he is the only one who can get the product working on a new server. Companies must be careful which behaviors they reward, and Conway’s law applies to deployment processes.
    What I learned by being forced to do deployments manually
    To an extent, the process reflecting hierarchy and division of responsibility is normal and necessary, which is why Octopus Deploy has built-in manual intervention and approval steps. But also, some of the motivations to stick with manual deployments are nonsense. Complex manual deployments are still more widespread than they need to be, which makes me feel bad for the developers who still live like me back in the 2000s — if you call that living.
    I guess there is an argument for the team-building experiences in those 2 a.m. deployments, much like deployments in the military sense of the word may teach the troops some valuable life lessons, even if the purported reason for the battle isn’t the real reason, and the costs turn out to be higher than anyone expects.
    It reminds me of a tour I had the good fortune to take in 2023 of the Adobe San Jose offices, in which a “Photoshop floor” includes time capsule conference rooms representing different periods in Photoshop’s history, including a 90’s room with a working Macintosh Classic running Photoshop 1.0. The past is an interesting and instructive place to visit but not somewhere you’d want to live in 2025.
Even so, my experience of Flintstones-style deployments gave me an appreciation for the ways a tool like Octopus Deploy automates everything I was forced to do manually in the past, which kept my motivation up when I was working through the teething problems once I was tasked with transforming a manual deployment process into an automated one. This appreciation for the value proposition of a tool like Octopus Deploy was why I later jumped at the opportunity to work for Octopus in 2021.
    What I learned working for Octopus Deploy
    The first thing I noticed was how friendly the devs were and the smoothness of the onboarding process, with only one small manual change to make the code run correctly in Docker on my dev box. The second thing I noticed was that this wasn’t heaven, and there were flaky integration tests, slow builds, and cake file output that hid the informative build errors. In fairness, at the time Octopus was in a period of learning how to upscale. There was a whole project I eventually joined to performance-tune the integration tests and Octopus itself. As an Octopus user, the product had seemed as close to magic as we were likely to find, compared to the hell we had to go through without a proper deployment tool. Yet there’s something heartening about knowing nobody has a flawless codebase, and even Octopus Deploy has some smelly code they have to deal with and suboptimal deployments of some stuff.
Once I made my peace with the fact that there’s no silver bullet that magically and perfectly solves any aspect of software, including deployments, my hot take is that deploying like an idiot comes down to a mismatch between the tools you use to deploy and the reward in complexity reduced versus complexity added. Therefore, one example of deploying like an idiot is the story I opened with, in which team members routinely drove to the office at 2 a.m. to manually deploy a complicated website involving database changes, background processes, web farms, and SLAs. But another example of deploying like an idiot might be a solo developer with a side project who sets up Azure DevOps to push to Octopus Deploy and pays more than necessary in money and cognitive load. Indeed, Octopus is a deceptively complex tool that can automate anything, not only deployments, but the complexity comes at the price of a learning curve and the risk of decision fatigue.
    For instance, when I used my “sharpening time” (the Octopus term for side-project time) to explore ways to deploy a JavaScript library, I found at least two different ways to do it in Octopus, depending on whether it’s acceptable to automate upgrading all your consumers to the latest version of your library or whether you need more control of versioning per consumer. Sidenote: the Angry Birds Octopus parody that Octopus marketing created to go with my “consumers of your JavaScript library as tenants” article was a highlight of my time at Octopus — I wish we could have made it playable like a Google Doodle.
    Nowadays I see automation as a spectrum for how automatic and sophisticated you need things to be, somewhat separate from the choice of tools. The challenge is locating that sweet spot, where automation makes your life easier versus the cost of licensing fees and the time and energy you need to devote to working on the deployment process. Octopus Deploy might be at one end of the spectrum of automated deployments when you need lots of control over a complicated automatic process. On the other end of the spectrum, the guy who runs Can I Use found that adopting git-ftp was a life upgrade from manually copying the modified files to his web server while keeping his process simple and not spending a lot of energy on more sophisticated deployment systems. Somewhere in the middle reside things like Bitbucket Pipelines or GitHub Actions, which are more automated and sophisticated than just git-ftp from your dev machine, but less complicated than Octopus together with TeamCity, which could be overkill on a simple project.
    The complexity of deployment might be something to consider when defining your architecture, similar to how planning poker can trigger a business to rethink the value of certain features once they obtain holistic feedback from the team on the overall cost. For instance, you might assume you need a database, but when you factor in the complexity it adds to roll-outs, you may be motivated to rethink whether your use case truly needs a database.
    What about serverless? Does serverless solve our problems given it’s supposed to eliminate the need to worry about how the server works?
    Reminder: Serverless isn’t serverless
    It should be uncontroversial to say that “serverless” is a misnomer, but how much this inaccuracy matters is debatable. I’ll give this analogy for why I think the name “serverless” is a problem: Early cars had a right to call themselves “horseless carriages” because they were a paradigm shift that meant your carriage could move without a horse. “Driverless cars” shouldn’t be called that, because they don’t remove the need for a driver; it’s just that the driver is an AI. “Self-driving car” is therefore a better name. Self-driving cars often work well, but completely ignoring the limitations of how they work can be fatal. When you unpack the term “serverless,” it’s like a purportedly horseless carriage still pulled by horse — but the driver claims his feeding and handling of the horse will be managed so well, the carriage will be so insulated from neighing and horse flatulence, passengers will feel as if the horse doesn’t exist. My counterargument is that the reality of the horse is bound to affect the passenger experience sooner or later.
    For example, one of my hobby projects was a rap battle chat deployed to Firebase. I needed the Firebase cloud function to calculate the score for each post using the same rhyme detection algorithm I used to power the front end. This worked fine in testing when I ran the Firebase function using the Cloud Functions emulator — but it performed unacceptably after my first deployment due to a cold start (loading the pronunciation dictionary was the likely culprit if you’re wondering). Much like my experiences in the 2000s, my code behaved dramatically differently on my dev machine than on the real Firebase, almost as though there is still a server I can’t pretend doesn’t exist — but now I had limited ability to tweak it. One way to fix it was to throw money at the problem.
    That serverless experience reminds me of a scene in the science fiction novel Rainbows End in which the curmudgeonly main character cuts open a car that isn’t designed to be serviced, only to find that all the components inside are labeled “No user-serviceable parts within.” He’s assured that even if he could cut open those parts, the car is “Russian dolls all the way down.” One of the other characters asks him: “Who’d want to change them once they’re made? Just trash ’em if they’re not working like you want.”
    I don’t want to seem like a curmudgeon — but my point is that while something like Firebase offers many conveniences and can simplify deployment and configuration, it can also move the problem to knowing which services are appropriate to pay extra for. And you may find your options are limited when things go wrong with a deployment or any other part of web development.
    Deploying this article
    Since I love self-referential twist endings, I’ll point out that even publishing an article like this has a variety of possible “deployment processes.” For instance, Octopus uses Jekyll for their blog. You make a branch with the markdown of your proposed blog post, and then marketing proposes changes before setting a publication date and merging. The relevant automated process will handle publication from there. This process has the advantage of using familiar tools for collaborating on changes to a file — but it might not feel approachable to teams not comfortable with Git, and it also might not be immediately apparent how to preview the final article as it will appear on the website.
On the other hand, when I create an article for CSS-Tricks, I use Dropbox Paper to create my initial draft, then send it to Geoff Graham, who makes edits, for which I get notifications. Once we have confirmed via email that we’re happy with the article, he manually ports it to Markdown in WordPress, then sends me a link to a pre-live version on the site to check before the article is scheduled for publication. It’s a manual process, so I sometimes find problems even in this “release” of static content collaborated on by only two people — but you gotta weigh how much risk there is of mistakes against how much value there would be in fully automating the process. With anything you have to publish on the web, keep searching for that sweet spot of elegance, risk, and the reward-to-effort ratio.
    Feeling Like I Have No Release: A Journey Towards Sane Deployments originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  12. Blogger
    by: Team LHB
    Mon, 07 Apr 2025 17:16:55 +0530

After years of training DevOps students and taking interviews for various positions, I have compiled this list of Docker interview questions (with answers) that are generally asked in the technical round.
    I have categorized them into various levels:
Entry level (very basic Docker questions)
Mid-level (slightly deep in Docker)
Senior-level (advanced level Docker knowledge)
Common for all (generic Docker stuff for all)
Practice Dockerfile examples with optimization challenge (you should love this)
If you are absolutely new to Docker, I highly recommend our Docker course for beginners.
Learn Docker: Complete Beginner's Course (Linux Handbook, Abdullah Tarek): Learn Docker, an important skill to have for any DevOps and modern sysadmin. Learn all the essentials of Docker in this series.
Let's go.
    Entry level Docker questions
    What is Docker?
    Docker is a containerization platform that allows you to package an application and its dependencies into a container. Unlike virtualization, Docker containers share the host OS kernel, making them more lightweight and efficient.
    What is Containerization?
    It’s a way to package software in a format that can run isolated on a shared OS.
    What are Containers?
Containers are packages that contain an application together with everything it needs to run, such as libraries and dependencies.
    What is Docker image?
A Docker image is a read-only template that contains a set of instructions for creating a container that can run on the Docker platform. It provides a convenient way to package up applications and preconfigured server environments, which you can use for your own private use or share publicly with other Docker users.
What is Docker Compose?
It is a tool for defining and running multi-container Docker applications.
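To make that concrete, here is a minimal sketch of a docker-compose.yml; the service names and images are placeholders for illustration:

```yaml
# docker-compose.yml -- minimal sketch with placeholder services
services:
  web:
    image: nginx:alpine            # placeholder web server image
    ports:
      - "8080:80"                  # map host port 8080 to container port 80
    depends_on:
      - db                         # start the database container first
  db:
    image: postgres:16             # placeholder database image
    environment:
      POSTGRES_PASSWORD: example   # required by the postgres image
```

Running docker compose up -d then starts both containers together, and docker compose down tears them back down.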
    What’s the difference between virtualization and containerization?
    Virtualization abstracts the entire machine with separate VMs, while containerization abstracts the application with lightweight containers sharing the host OS.
    Describe a Docker container’s lifecycle
    Create | Run | Pause | Unpause | Start | Stop | Restart | Kill | Destroy
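Roughly, those stages map to the following commands (nginx is used purely as a placeholder image):

```bash
docker create --name web nginx   # Create: the container exists but is not running
docker start web                 # Start/Run: the container is now running
docker pause web                 # Pause: freeze every process in the container
docker unpause web               # Unpause: resume them
docker stop web                  # Stop: SIGTERM, then SIGKILL after a grace period
docker rm web                    # Destroy: remove the stopped container
```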
    What is a volume in Docker, and which command do you use to create it?
A volume in Docker is a persistent storage mechanism that allows data to be stored and accessed independently of the container's lifecycle. Volumes enable you to share data between containers or persist data even after a container is stopped or removed.
docker volume create <volume_name>
Example: docker run -v data_volume:/var/lib/mysql mysql
What is Docker Swarm?
    Docker Swarm is a tool for clustering & managing containers across multiple hosts.
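A quick sketch of what that looks like in practice (nginx is just a placeholder image):

```bash
docker swarm init                                      # make this host a swarm manager
docker service create --name web --replicas 3 nginx    # run three replicas across the cluster
docker service ls                                      # list services and their replica counts
```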
    How do you remove unused data in Docker?
    Use docker system prune to remove unused data, including stopped containers, unused networks, and dangling images.
    Mid-level Docker Questions
    What command retrieves detailed information about a Docker container?
    Use docker inspect <container_id> to get detailed JSON information about a specific Docker container.
    How do the Docker Daemon and Docker Client interact?
    The Docker Client communicates with the Docker Daemon through a REST API over a Unix socket or TCP/IP
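You can poke at that REST API yourself with curl, assuming the default Unix socket path:

```bash
# Ask the Docker daemon for its version over the Unix socket
curl --unix-socket /var/run/docker.sock http://localhost/version
# List running containers through the same API the docker CLI uses
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
```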
    How can you set CPU and memory limits for a Docker container?
    Use docker run --memory="512m" --cpus="1.5" <image> to set memory and CPU limits.
    Can a Docker container be configured to restart automatically?
    Yes, a Docker container can be configured to restart automatically using restart policies such as --restart always or --restart unless-stopped.
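For example (nginx is just a placeholder image):

```bash
docker run -d --restart unless-stopped nginx   # restarts on failure and on daemon restart, unless you stop it manually
```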
    What methods can you use to debug issues in a Docker container?
Inspect logs with docker logs <container_id> to view output and error messages.
Execute commands interactively using docker exec -it <container_id> /bin/bash to access the container's shell.
Check container status and configuration with docker inspect <container_id>.
Monitor resource usage with docker stats to view real-time performance metrics.
Use Docker's built-in debugging tools and third-party monitoring solutions for deeper analysis.
What is the purpose of Docker Secrets?
    Docker Secrets securely manage sensitive data like passwords for Docker services. Use docker secret create <secret_name> <file> to add secrets.
    What are the different types of networks in Docker, and how do they differ?
    Docker provides several types of networks to manage how containers communicate with each other and with external systems.
    Here are the main types:
Bridge
None
Host
Overlay Network
Macvlan Network
IPvlan Network
bridge: This is the default network mode. Each container connected to a bridge network gets its own IP address and can communicate with other containers on the same bridge network using this IP.
docker run ubuntu
Useful for scenarios where you want isolated containers to communicate through a shared internal network.
    none: Containers attached to the none network are not connected to any network. They don't have any network interfaces except the loopback interface (lo).
docker run --network=none ubuntu
Useful when you want to create a container with no external network access for security reasons.
    host: The container shares the network stack of the Docker host, which means it has direct access to the host's network interfaces. There's no isolation between the container and the host network.
docker run --network=host ubuntu
Useful when you need the highest possible network performance, or when you need the container to use a service on the host system.
    Overlay Network : Overlay networks connect multiple Docker daemons together, enabling swarm services to communicate with each other. It's used in Docker Swarm mode for multi-host networking.
docker network create -d overlay my_overlay_network
Useful for distributed applications that span multiple hosts in a Docker Swarm.
    Macvlan Network : Macvlan networks allow you to assign a MAC address to a container, making it appear as a physical device on the network. The container can communicate directly with the physical network using its own IP address.
docker network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=eth0 my_macvlan_network
Useful when you need containers to appear as physical devices on the network and need full control over the network configuration.
    IPvlan Network: Similar to Macvlan, but uses different methods to route packets. It's more lightweight and provides better performance by leveraging the Linux kernel's built-in network functionalities.
docker network create -d ipvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=eth0 my_ipvlan_network
Useful for scenarios where you need low-latency, high-throughput networking with minimal overhead.
    Explain the main components of Docker architecture
    Docker consists of the Docker Host, Docker Daemon, Docker Client, and Docker Registry.
The Docker Host is the computer (or server) where Docker is installed and running. It's like the home for Docker containers, where they live and run.
The Docker Daemon is a background service that manages Docker containers on the Docker Host. It's like the manager of the Docker Host, responsible for creating, running, and monitoring containers based on instructions it receives.
The Docker Client communicates with the Docker Daemon, which manages containers.
The Docker Registry stores and distributes Docker images.
How does a Docker container differ from an image?
    A Docker image is a static, read-only blueprint, while a container is a running instance of that image. Containers are dynamic and can be modified or deleted without affecting the original image.
    Explain the purpose of a Dockerfile.
A Dockerfile is a script containing instructions to build a Docker image. It specifies the base image, sets up the environment, installs dependencies, and defines how the application should run.
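As a rough sketch, a Dockerfile for a hypothetical Node.js app might look like this (the file names and port are assumptions for illustration):

```dockerfile
FROM node:20-alpine            # base image
WORKDIR /app                   # working directory inside the image
COPY package*.json ./          # copy dependency manifests first for better layer caching
RUN npm install                # install dependencies
COPY . .                       # copy the application source
EXPOSE 3000                    # document the port the app listens on
CMD ["node", "server.js"]      # default command when a container starts
```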
    How do you link containers in Docker?
    Docker provides network options to enable communication between containers. Docker Compose can also be used to define and manage multi-container applications.
    How can you secure a Docker container?
Container security involves using official base images, minimizing the number of running processes, implementing least privilege principles, regularly updating images, and utilizing Docker security scanning tools, e.g., Docker vulnerability scanning.
    Difference between ARG & ENV?
ARG is for build-time variables, and its scope is limited to the build process. ENV is for environment variables, and its scope extends to both the build process and the running container.
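A small sketch of that scope difference (the variable names and values are made up):

```dockerfile
FROM alpine:3.19
ARG APP_VERSION=1.0.0                             # available only while the image is built
ENV APP_ENV=production                            # baked into the image, visible at runtime
RUN echo "building $APP_VERSION for $APP_ENV"     # both can be used at build time
# In the running container, APP_ENV is set but APP_VERSION is not.
```

The build-time value can be overridden with docker build --build-arg APP_VERSION=2.0.0 . (the trailing dot is the build context).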
Difference between RUN, ENTRYPOINT & CMD?
RUN: Executes a command during the image build process, creating a new image layer.
ENTRYPOINT: Defines a fixed command that always runs when the container starts. Note: it can be overridden at runtime with the --entrypoint flag.
CMD: Specifies a default command or arguments that can be overridden at runtime.
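For instance, a tiny illustrative image that pings a configurable host:

```dockerfile
FROM alpine:3.19
ENTRYPOINT ["ping", "-c", "3"]    # fixed executable and arguments
CMD ["localhost"]                 # default argument that is easy to override
# docker run <image>              -> ping -c 3 localhost
# docker run <image> example.com  -> ping -c 3 example.com (CMD is replaced)
```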
Difference between COPY & ADD?
If you are just copying local files, it's often better to use COPY for simplicity. Use ADD when you need additional features like extracting compressed archives or pulling resources from URLs.
How do you drop the MAC_ADMIN capability when running a Docker container?
    Use the --cap-drop flag with the docker run command:
docker run --cap-drop MAC_ADMIN ubuntu
How do you add the NET_BIND_SERVICE capability when running a Docker container?
Use the --cap-add flag with the docker run command:
docker run --cap-add NET_BIND_SERVICE ubuntu
How do you run a Docker container with all privileges enabled?
    Use the --privileged flag with the docker run command:
    docker run --privileged ubuntu  
     
  13. Blogger
    by: Abhishek Kumar
    Sat, 05 Apr 2025 06:40:23 GMT

    There was a time when coding meant painstakingly writing every line, debugging cryptic errors at 3 AM, and pretending to understand regex. But in 2025? Coding has evolved, or rather, it has vibed into something entirely new.
    Enter Vibe Coding, a phenomenon where instead of manually structuring functions and loops, you simply tell AI what you want, and it does the hard work for you.
    This approach has taken over modern software development. Tools like Cursor and Windsurf, AI-powered code editors built specifically for this new workflow, are helping developers create entire applications without in-depth coding knowledge.
    Gone are the days of memorizing syntax. Now, you can describe an app idea in plain English, and AI will generate, debug, and even refactor the code for you.
    At first, it sounded too good to be true. But then people started launching SaaS businesses with nothing but Vibe Coding, using AI to write everything from landing pages to backend logic.
    We thought, since the future of coding is AI-assisted, you’ll need the right tools to make the most of it.
    So, here’s a handpicked list of the best code editors for vibe coding in 2025, designed to help you turn your wildest ideas into real projects, fast. 💨
🚧NON-FOSS Warning: Not all the editors mentioned in this article are open source. While some are, many of the AI-powered features provided by these tools rely on cloud services that often include a free tier, but are not entirely free to use. AI compute isn't cheap! When local LLM support is available, I've made sure to mention it specifically. Always check the official documentation or pricing page before diving in.
1. Zed
    If VS Code feels sluggish and Cursor is a bit too heavy on the vibes, then Zed might just be your new favorite playground.
    Written entirely in Rust, Zed is built for blazing fast speed. It’s designed to utilize multiple CPU cores and your GPU, making every scroll, search, and keystroke snappy as heck.
    And while it's still a relatively new player in the editor world, the Zed team is laser-focused on building the fastest, most seamless AI-native code editor out there.
    You get full AI interaction built right into the editor, thanks to the Assistant Panel and inline assistants that let you refactor, generate, and edit code using natural language, without leaving your flow.
    Want to use Claude 3.5, a self-hosted LLM via Ollama, or something else? Zed’s open API lets you plug in what works for you.
    Key Features:
    ✅ Built entirely in Rust for extreme performance and low latency.
    ✅ Native AI support with inline edits, slash commands, and fast refactoring.
    ✅ Assistant Panel for controlling AI interactions and inspecting suggestions.
    ✅ Plug-and-play LLM support, including Ollama and Claude via API.
    ✅ Workflow Commands to automate complex tasks across multiple files.
    ✅ Custom Slash Commands with WebAssembly or JSON for tailored AI workflows.
Zed AI
2. Flexpilot IDE
    Flexpilot IDE joins the growing league of open-source, AI-native code editors that prioritize developer control and privacy.
    Forked from VS Code, it's designed to be fully customizable, letting you bring your own API keys or run local LLMs (like via Ollama) for a more private and cost-effective AI experience.
    Much like Zed, it takes a developer-first approach: no locked-in services, no mysterious backend calls. Just a clean, modern editor that plays nice with whatever AI setup you prefer.
    Key Features
    ✅ AI-powered autocomplete with context-aware suggestions
    ✅ Simultaneously edit multiple files in real-time with AI assistance
    ✅ Ask code-specific questions in a side panel for instant guidance
    ✅ Refactor, explain, or improve code directly in your files
    ✅ Get instant AI help with a keyboard shortcut, no interruptions
    ✅ Talk to your editor and get code suggestions instantly
    ✅ Run commands and debug with AI assistance inside your terminal
    ✅ Reference code elements and editor data precisely
    ✅ AI-powered renaming of variables, functions, and classes
    ✅ Generate commit messages and PR descriptions in a click
    ✅ Track token consumption across AI interactions
    ✅ Use any LLM: OpenAI, Claude, Mistral, or local Ollama
    ✅ Compatible with GitHub Copilot and other VSCode extensions
Flexpilot
3. VS Code with GitHub Copilot
    While GitHub Copilot isn’t a standalone code editor, it’s deeply integrated into Visual Studio Code, which makes sense since Microsoft owns both GitHub and VS Code.
    As one of the most widely used AI coding assistants, Copilot provides real-time AI-powered code suggestions that adapt to your project’s context.
    Whether you’re writing Python scripts, JavaScript functions, or even Go routines, Copilot speeds up development by generating entire functions, automating repetitive tasks, and even debugging your code.
    Key Features:
    ✅ AI-driven code suggestions in real-time.
    ✅ Supports multiple languages, including Python, JavaScript, and Go.
    ✅ Seamless integration with VS Code, Neovim, and JetBrains IDEs.
    ✅ Free for students and open-source developers.
GitHub Copilot
4. Pear AI
    Pear AI is a fork of VSCode, built with AI-first development in mind. It’s kinda like Cursor or Windsurf, but with a twist, you can plug in your own AI server, run local models via Ollama (which is probably the easiest route), or just use theirs.
    It has autocomplete, context-aware chat, and a few other handy features.
    Now, full transparency, it's still a bit rough around the edges. Not as polished, a bit slow at times, and the updates? Eh, not super frequent.
    The setup can feel a little over-engineered if you’re just trying to get rolling. But… I see potential here. If the right devs get their hands on it, this could shape up into something big.
    Key Features
    ✅ VSCode-based editor with a clean UI and familiar feel
    ✅ "Knows your code" – context-aware chat that actually understands your project
    ✅ Works with remote APIs or local LLMs (Ollama integration is the easiest)
    ✅ Built-in AI code generation tools curated into a neat catalog
    ✅ Autocomplete and inline code suggestions, powered by your model of choice
    ✅ Ideal for devs experimenting with custom AI backends or local AI setups
Pear AI
5. Fleet by JetBrains
    If you've ever written Java, Python, or even Kotlin, chances are you’ve used or at least heard of JetBrains IDEs like IntelliJ IDEA, PyCharm, or WebStorm.
    JetBrains has long been the gold standard for feature-rich developer environments.
    Now, they're stepping into the future of coding with Fleet, a modern, lightweight, and AI-powered code editor designed to simplify your workflow while keeping JetBrains' signature intelligence baked in.
    Fleet isn’t trying to replace IntelliJ, it’s carving a space of its own: minimal UI, fast startup, real-time collaboration, and enough built-in tools to support full-stack projects out of the box.
    And with JetBrains’ new AI assistant baked in, you're getting contextual help, code generation, and terminal chat, all without leaving your editor.
    Key Features
    ✅ Designed for fast startup and low memory usage without sacrificing features
    ✅ Full-Stack Language Support- Java, Kotlin, JavaScript, TypeScript, Python, Go, and more
    ✅ Real-Time Collaboration.
    ✅ Integrated Git Tools like Diff viewer, branch management, and seamless commits
    ✅ Use individual or shared terminals in collaborative sessions
    ✅ Auto-generate code, fix bugs, or chat with your terminal
    ✅ Docker & Kubernetes Support - Manage containers right inside your IDE
    ✅ Preview, format, and edit Markdown files with live previews
    ✅ Custom themes, keymaps, and future language/tech support via plugins
Fleet
6. Cursor
    Cursor is a heavily modified fork of VSCode with deep AI integration. It supports multi-file editing, inline chat, autocomplete for code, markdown, and even JSON.

    It’s fast, responsive, and great for quickly shipping out tutorials or apps. You also get terminal autocompletion and contextual AI interactions right in your editor.
    Key Features
    ✅ Auto-imports and suggestions optimized for TypeScript and Python
    ✅ Generate entire app components or structures with a single command
    ✅ Context-gathering assistant that can interact with your terminal
    ✅ Drag & drop folders for AI-powered explanations and refactoring
    ✅ Process natural language commands inside the terminal
    ✅ AI detects issues in your code and suggests fixes
    ✅ Choose from GPT-4o, Claude 3.5 Sonnet, o1, and more
Cursor
7. Windsurf (Previously Codeium)
    Windsurf takes things further with an agentic approach, it can autonomously run scripts, check outputs, and continue building based on the results until it fulfills your request.

    Though it’s relatively new, Windsurf shows massive promise with smooth performance and smart automation packed into a familiar development interface.
    Built on (you guessed it) VS Code, Windsurf is crafted by Codeium and introduces features like Supercomplete and Cascade, focusing on deep workspace understanding and intelligent, real-time code generation.
    Key Features
    ✅ SuperComplete for context-aware, full-block code suggestions across your entire project
    ✅ Real-time chat assistant for debugging, refactoring, and coding help across languages
    ✅ Command Palette with custom commands.
    ✅ Cascade feature for syncing project context and iterative problem-solving
    ✅ Flow tech for automatic workspace updates and intelligent context awareness
    ✅ Supports top-tier models like GPT-4o, Claude 3.5 Sonnet, LLaMA 3.1 70B & 405B
    It’s still new but shows a lot of promise with smooth performance and advanced automation capabilities baked right in.
Windsurf AI
Final thoughts
    I’ve personally used GitHub Copilot’s free tier quite a bit, and recently gave Zed AI a spin and I totally get why the internet is buzzing with excitement.
    There’s something oddly satisfying about typing a few lines of instruction and then just... letting your editor take over while you lean back.
    That said, I’ve also spent hours untangling some hilariously off-mark Copilot-generated bugs. So yeah, it’s powerful, but far from perfect.
    If you’re just stepping into the AI coding world, don’t dive in blind. Take time to learn the basics, experiment with different editors and assistants, and figure out which one actually helps you ship code your way.
    And if you're already using an AI editor you swear by, let us know in the comments. Always curious to hear what other devs are using.
  14. Blogger

    A New “Web” Readiness Report

    by: Juan Diego Rodríguez
    Fri, 04 Apr 2025 13:05:22 +0000

    The beauty of research is finding yourself on a completely unrelated topic mere minutes from opening your browser. It happened to me while writing an Almanac entry on @namespace, an at-rule that we probably won’t ever use and is often regarded as a legacy piece of CSS. Maybe that’s why there wasn’t a lot of info about it until I found a 2010s post on @namespace by Divya Manian. The post was incredibly enlightening, but that’s beside the point; what’s important is that in Divya’s blog, there were arrows on the sides to read the previous and next posts:
    Don’t ask me why, but without noticing, I somehow clicked the left arrow twice, which led me to a post on “Notes from HTML5 Readiness Hacking.”
    What’s HTML 5 Readiness?!
    HTML 5 Readiness was a site created by Paul Irish and Divya Manian that showed the browser support for several web features through the lens of a rainbow of colors. The features were considered (at the time) state-of-the-art or bleeding-edge stuff, such as media queries, transitions, video and audio tags, etc. As each browser supported a feature, a section of the rainbow would be added.
    I think it worked from 2010 to 2013, although it showed browser support data from 2008. I can’t describe how nostalgic it made me feel; it reminded me of simpler times when even SVGs weren’t fully supported. What almost made me shed a tear was thinking that, if this tool was updated today, all of the features would be colored in a full rainbow.
    A new web readiness
    It got me thinking: there are so many new features coming to CSS (many that haven’t shipped to any browser) that there could be a new HTML5 Readiness with all of them. That’s why I set myself to do exactly that last weekend, a Web Readiness 2025 that holds each of the features coming to HTML and CSS I am most excited about.
    You can visit it at webreadiness.com!
    Right now, it looks kinda empty, but as time goes we will hopefully see how the rainbow grows:
    Even though it was a weekend project, I took the opportunity to dip my toes into a couple of things I wanted to learn. Below are also some snippets I think are worth sharing.
    The data is sourced from Baseline
    My first thought was to mod the <baseline-status> web component made by the Chrome team because I have been wanting to use it since it came out. In short, it lets you embed the support data for a web feature directly into your blog. Not long ago, in fact, Geoff added it as a WordPress block in CSS-Tricks, which has been super useful while writing the Almanac:
    However, I immediately realized that using the <baseline-status> would be needlessly laborious, so I instead pulled the data from the Web Features API — https://api.webstatus.dev/v1/features/ — and displayed it myself. You can find all the available features in the GitHub repo.
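In rough terms, fetching a single feature looks something like the sketch below; the feature ID and the response handling here are illustrative assumptions rather than the site's actual code:

```js
// Rough sketch: fetch one feature's data from the Web Features API
async function fetchFeature(featureId) {
  const response = await fetch(`https://api.webstatus.dev/v1/features/${featureId}`);
  if (!response.ok) throw new Error(`Request failed: ${response.status}`);
  return response.json(); // contains the feature's Baseline / browser support info
}

fetchFeature("has").then((data) => console.log(data));
```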
    Each ray is a web component
Another feature I have been wanting to learn more about was Web Components, and since Geoff recently published his notes on Scott Jehl’s course Web Components Demystified, I thought it was the perfect chance. In this case, each ray would be a web component with a simple lifecycle:
Get instantiated.
Read the feature ID from a data-feature attribute.
Fetch its data from the Web Features API.
Display its support as a list.
Said and done! The simplified version of that code looks something like the following:
class BaselineRay extends HTMLElement {
  constructor() {
    super();
  }

  static get observedAttributes() {
    return ["data-feature"];
  }

  attributeChangedCallback(property, oldValue, newValue) {
    if (oldValue !== newValue) {
      this[property] = newValue;
    }
  }

  async #fetchFeature(endpoint, featureID) {
    // Fetch Feature Function
  }

  async connectedCallback() {
    // Call fetchFeature and Output List
  }
}

customElements.define("baseline-ray", BaselineRay);

Animations with the Web Animation API
    I must admit, I am not too design-savvy (I hope it isn’t that obvious), so what I lacked in design, I made up with some animations. When the page initially loads, a welcome animation is easily achieved with a couple of timed keyframes. However, the animation between the rainbow and list layouts is a little more involved since it depends on the user’s input, so we have to trigger them with JavaScript.
    At first, I thought it would be easier to do them with Same-Document View Transitions, but I found myself battling with the browser’s default transitions and the lack of good documentation beyond Chrome’s posts. That’s why I decided on the Web Animation API, which lets you trigger transitions in a declarative manner.
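The real animation is more involved, but the general shape of triggering it with the Web Animation API looks something like this (the selector and keyframe values here are made up for illustration):

```js
// Animate the rays container when the user switches layouts
const rays = document.querySelector(".rays");

function animateToListLayout() {
  rays.animate(
    [
      { opacity: 1, transform: "rotate(0deg)" },
      { opacity: 0.5, transform: "rotate(90deg)" },
    ],
    { duration: 400, easing: "ease-in-out", fill: "forwards" }
  );
}
```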
    sibling-index() and sibling-count()
A while ago, I wrote about the sibling-index() and sibling-count() functions. As their names imply, they return the current index of an element among its siblings, and the total number of siblings, respectively. While Chrome announced its intent to ship both functions, I know it will be a while until they reach baseline support, but I still needed them to rotate and move each ray.
    In that same post, I talked about three options to polyfill each function. The first two were CSS-only, but this time I took the simplest JavaScript way which observes the number of rays and adds custom properties with its index and total count. Sure, it’s a bit overkill since the amount of rays doesn’t change, but pretty easy to implement:
const elements = document.querySelector(".rays");

const updateCustomProperties = () => {
  let index = 0;
  for (let element of elements.children) {
    element.style.setProperty("--sibling-index", index);
    index++;
  }
  elements.style.setProperty("--sibling-count", elements.children.length - 1);
};

updateCustomProperties();

const observer = new MutationObserver(updateCustomProperties);
const config = { attributes: false, childList: true, subtree: false };
observer.observe(elements, config);

With this, I could position each ray in a 180-degree range:
baseline-ray ul {
  --position: calc(180 / var(--sibling-count) * var(--sibling-index) - 90);
  --rotation: calc(var(--position) * 1deg);
  transform: translateX(-50%) rotate(var(--rotation)) translateY(var(--ray-separation));
  transform-origin: bottom center;
}

The selection is JavaScript-less
In the browser captions, if you hover over a specific browser, that browser’s color will pop out more in the rainbow while the rest becomes a little transparent. Since in my HTML, the caption element isn’t anywhere near the rainbow (as a parent or a sibling), I thought I would need JavaScript for the task, but then I remembered I could simply use the :has() selector.
    It works by detecting whenever the closest parent of both elements (it could be <section>, <main>, or the whole <body>) has a .caption item with a :hover pseudo-class. Once detected, we increase the size of each ray section of the same browser, while decreasing the opacity of the rest of the ray sections.
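In rough terms, the CSS looks something like this; the class names are placeholders rather than the ones used on the real site:

```css
/* When the Chrome caption is hovered, grow Chrome's ray sections
   and fade every other browser's sections. */
main:has(.caption .chrome:hover) baseline-ray .chrome {
  scale: 1.15;
}

main:has(.caption .chrome:hover) baseline-ray li:not(.chrome) {
  opacity: 0.4;
}
```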
What’s next?!
    What’s left now is to wait! I hope people can visit the page from time to time and see how the rainbow grows. Like the original HTML 5 Readiness page, I also want to take a snapshot at the end of the year to see how it looks until each feature is fully supported. Hopefully, it won’t take long, especially seeing the browser’s effort to ship things faster and improve interoperability.
    Also, let me know if you think a feature is missing! I tried my best to pick exciting features without baseline support.
View the report
A New “Web” Readiness Report originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  15. Blogger
    by: Abhishek Prakash
    Thu, 03 Apr 2025 04:28:54 GMT

    Linux distributions agreeing to a single universal packaging system? That sounds like a joke, right? That's because it is.
It's been a tradition of sorts to prank readers on the 1st of April with a humorous article. Since we are already past the 1st of April in all time zones, let me share this year's April Fools article with you. I hope you find it as amusing as I did while writing it 😄
No Snap or FlatPak! Linux Distros Agreed to Have Only One Universal Packaging: Is this the end of fragmentation for Linux? (It's FOSS News, Abhishek)
💬 Let's see what else you get in this edition
Vivaldi offering free built-in VPN.
Tools to enhance AppImage experience.
Serpent OS going through a rebranding.
And other Linux news, tips, and, of course, memes!
This edition of FOSS Weekly is supported by Typesense.
❇️ Typesense: Open Source Search Engine
    Typesense is the free, open-source search engine for forward-looking devs. Make it easy on people: Tpyos? Typesense knows we mean typos, and they happen. With ML-powered typo tolerance and semantic search, Typesense helps your customers find what they’re looking for—fast.
    Check them out on GitHub.
GitHub - typesense/typesense: Open Source alternative to Algolia + Pinecone and an Easier-to-Use alternative to ElasticSearch ⚡ 🔍 ✨ Fast, typo tolerant, in-memory fuzzy Search Engine for building delightful search experiences
📰 Linux and Open Source News
Vivaldi has teamed up with Proton VPN to provide an in-browser VPN.
Serpent OS is now called AerynOS, and the first release is already here.
GoboLinux has had a change in leadership, with a new release coming after a five-year gap.
Proton now offers more features in its Drive and Docs app.
🧠 What We’re Thinking About
    Thank goodness Linux saves us from this 🤷
New Windows 11 build makes mandatory Microsoft Account sign-in even more mandatory: “Bypassnro” is an easy MS Account workaround for Home and Pro Windows editions. (Ars Technica, Andrew Cunningham)
🧮 Linux Tips, Tutorials and More
Move away from Google Photos and self-host a privacy-focused solution instead.
Window managers on Linux allow you to organize your windows and make use of screen space efficiently.
Fed up with Netflix streaming in SD quality? You can make it play Full-HD content on Firefox by using a neat trick.
Love AppImage? These tools will help you improve your AppImage experience.
5 Tools to Enhance Your AppImage Experience on Linux: Love using AppImages but hate the mess? Check out these handy tools that make it super easy to organize, update, and manage AppImages on your Linux system. (It's FOSS, Sreenath)
👷 Homelab and Maker's Corner
    Don't lose knowledge! Self-host your own Wikipedia or Arch Wiki:
Taking Knowledge in My Own Hands By Self Hosting Wikipedia and Arch Wiki: Doomsday or not, knowledge should be preserved. (It's FOSS, Abhishek Kumar)
✨ Apps Highlight
    Find yourself often forgetting things? Then you might need a reminder app like Tasks.org.
Ditch Proprietary Reminder Apps, Try Tasks.org Instead: Stay organized with Tasks.org, an open source to-do and reminders app that doesn’t sell your data. (It's FOSS News, Sourav Rudra)
📽️ Videos I am Creating for You
    I tested COSMIC alpha on Fedora 42 beta in the latest video. And I have taken some of the feedback to improve the audio quality in this one.
Subscribe to It's FOSS YouTube Channel
🧩 Quiz Time
    Can you solve this riddle?
Riddler’s Back: Open-Source App Quiz: Guess the open-source applications following the riddles. (It's FOSS, Ankush Das)
After you are done with that, you can try your hand at matching Linux apps with their roles.
    💡 Quick Handy Tip
    In KDE Plasma, you can edit copied texts in the Clipboard. First, launch the clipboard using the shortcut CTRL+V. Now, click on the Edit button, which looks like a pencil.
    Then, edit the contents and click on Save to store it as a new clipboard item.
    🤣 Meme of the Week
    Such a nice vanity plate. 😮
    🗓️ Tech Trivia
    On March 31, 1939, Harvard and IBM signed an agreement to build the Mark I, also known as the IBM Automatic Sequence Controlled Calculator (ASCC).
    This pioneering electromechanical computer, conceived by Howard Aiken, interpreted instructions from paper tape and data from punch cards, playing a significant role in World War II calculations.
    🧑‍🤝‍🧑 FOSSverse Corner
    FOSSers are discussing which is the most underrated Linux distribution out there. Care to share your views?
What is the most underrated Linux distribution? There are some distros like Debian, Ubuntu and Mint that are commonly used and everyone knows how good they are. But there are others that are used only by a few people and perform equally as well. Would you like to nominate your choice for the most underrated Linux distro? I will nominate Void Linux… it is No 93 on distrowatch and performs for me as well as MX Linux or Debian. (It's FOSS Community, nevj)
❤️ With love
    Share it with your Linux-using friends and encourage them to subscribe (hint: it's here).
    Share the articles in Linux Subreddits and community forums.
    Follow us on Google News and stay updated in your News feed.
    Opt for It's FOSS Plus membership and support us 🙏
    Enjoy FOSS 😄
  16. Blogger

    SMIL on?

    by: Geoff Graham
    Wed, 02 Apr 2025 12:37:19 +0000

    I was chatting with Andy Clarke the other day about a new article he wants to write about SVG animations.
“I’ve read some things that said that SMIL might be a dead end,” he said. “Whaddya think?”
    That was my impression, too. Sarah Drasner summed up the situation nicely way back in 2017:
    Chrome was also in on the party and published an intent to deprecate SMIL, citing work in other browsers to support SVG animations in CSS. MDN linked to that same thread in its SMIL documentation when it published a deprecation warning.
    Well, Chrome never deprecated SMIL. At least according to this reply in the thread dated 2023. And since then, we’ve also seen Microsoft’s Edge adopt a Chromium engine, effectively making it a Chrome clone. Also, last I checked, Caniuse reports full support in WebKit browsers.
    This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.
[Browser support table from Caniuse]
Now, I’m not saying that SMIL is perfectly alive and well. It could still very well be in the doldrums, especially when there are robust alternatives in CSS and JavaScript. But it’s also not dead in the water.

    SMIL on? originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  17. Blogger
    by: Sreenath
    Wed, 02 Apr 2025 10:50:07 GMT

    The portable AppImage format is quite popular among developers and users alike. It allows you to run applications without installation or dependency issues, on virtually any Linux distribution.
    However, managing multiple AppImages or keeping them updated can sometimes be a bit cumbersome. Fortunately, there are third-party tools that simplify the process, making it easier to organize, update, and integrate AppImages into your Linux system.
    In this article, I’ll share some useful tools that can help you manage AppImages more effectively and enhance your overall experience.
    Gear Lever
    Gear Lever is a modern GTK-based application that lets you manage your local AppImage files. It primarily helps you organize AppImages by adding desktop entries, updating applications, and more.
Installed AppImages in Gear Lever
Features of Gear Lever
Drag and drop files directly from your file manager
Update apps in place
Keep multiple versions installed
Install Gear Lever
    Gear Lever is available as a Flatpak package. You can install it with the following command:
flatpak install flathub it.mijorus.gearlever
Gear Lever
AppImage Launcher
📋While the last release of AppImage Launcher was a few years ago, it still works fine.
If you're a frequent user of AppImage packages, you should definitely check out AppImage Launcher. This open-source tool helps integrate AppImages into your system.
    It allows users to quickly add AppImages to the application menu, manage updates, and remove them with just a few clicks.
AppImage Launcher
Features of AppImage Launcher
Adds desktop integration to AppImage files
Includes a helper tool to manage AppImage updates
Allows easy removal of AppImages
Provides CLI tools for terminal-based operations and automation
Install AppImage Launcher
    For Ubuntu users, the .deb file is available under the Continuous build section on the releases page.
AppImage Launcher
AppImage Package Manager and AppMan
    AppImage Package Manager (AM) is designed to simplify AppImage management, functioning similarly to how APT or DNF handle native packages. It supports not just AppImages, but other portable formats as well.
    AM relies on a large database of shell scripts, inspired by the Arch User Repository (AUR), to manage AppImages from various sources.
    A similar tool is AppMan. It is basically AM but manages all your apps locally without needing root access.
    If you are a casual user, you can use AppMan instead of AM so that everything will be local and no need for any sudo privileges.
    AppImage Package Manager (AppMan Version)
    Features of AppImage Package Manager
Supports AppImages and standalone archives (e.g., Firefox, Blender)
Includes a comprehensive shell script database for official and community-sourced AppImages
Create and restore snapshots
Drag-and-drop AppImage integration
Convert legacy AppImage formats
Install AppImage Package Manager
    To install, run the following commands:
wget -q https://raw.githubusercontent.com/ivan-hc/AM/main/AM-INSTALLER && chmod a+x ./AM-INSTALLER && ./AM-INSTALLER
The installer will prompt you to choose between AM and AppMan. Choose AppMan if you prefer local, privilege-free management.
AppImage Package Manager
AppImagePool
    AppImagePool is a Flutter-based client for AppImage Hub. It offers a clean interface to browse and download AppImages listed on AppImage Hub.
AppImage Pool client home page
Features of AppImagePool
Categorized list of AppImages
Download from GitHub directly, no extra server involved
Integrate and disintegrate AppImages easily from your system
Version history and multi-download support
Installing AppImage Pool
    Download the AppImage file from the official GitHub releases page.
Download AppImage Pool
A Flatpak package is also available from Flathub. If your system has Flatpak support, use the command:
flatpak install flathub io.github.prateekmedia.appimagepool
Zap
📋The last release of Zap was a few years ago but it worked fine in my testing.
Zap is an AppImage package manager written in Go. It allows you to install, update, and integrate AppImage packages efficiently.
Zap AppImage Package Manager
    Features of Zap
Install packages from the AppImage catalog using registered names
Select and install specific versions
Use the Zap daemon for automatic update checks
Install AppImages from GitHub releases
Install Zap
    To install Zap locally, run:
curl https://raw.githubusercontent.com/srevinsaju/zap/main/install.sh | bash -s
For a system-wide installation, run:
curl https://raw.githubusercontent.com/srevinsaju/zap/main/install.sh | sudo bash -s
Zap
In the end...
    Here are a few more resources that an AppImage lover might like:
Bauh package manager: bauh is a graphical interface for managing various Linux package formats like AppImage, Deb, Flatpak, etc.
XApp-Thumbnailers: This is a thumbnail generation tool for popular file managers.
Awesome AppImage: Lists several AppImage tools and resources.
AppImage is a fantastic way to use portable applications on Linux, but managing them manually can be tedious over time. Thankfully, the tools mentioned above make it easier to organize, update, and integrate AppImages into your workflow.
From feature-rich GUI tools like Gear Lever and AppImagePool to CLI tools like AM and AppMan, there’s something here for every kind of user. Try out a few and see which one fits your style best.
  18. Blogger
    by: Bryan Robinson
    Tue, 01 Apr 2025 13:50:58 +0000

    I’m a big fan of Astro’s focus on developer experience (DX) and the onboarding of new developers. While the basic DX is strong, I can easily make a convoluted system that is hard to onboard my own developers to. I don’t want that to happen.
    If I have multiple developers working on a project, I want them to know exactly what to expect from every component that they have at their disposal. This goes double for myself in the future when I’ve forgotten how to work with my own system!
    To do that, a developer could go read each component and get a strong grasp of it before using one, but that feels like the onboarding would be incredibly slow. A better way would be to set up the interface so that as the developer is using the component, they have the right knowledge immediately available. Beyond that, it would bake in some defaults that don’t allow developers to make costly mistakes and alerts them to what those mistakes are before pushing code!
    Enter, of course, TypeScript. Astro comes with TypeScript set up out of the box. You don’t have to use it, but since it’s there, let’s talk about how to use it to craft a stronger DX for our development teams.
    Watch
    I’ve also recorded a video version of this article that you can watch if that’s your jam. Check it out on YouTube for chapters and closed captioning.
    Setup
    In this demo, we’re going to use a basic Astro project. To get this started, run the following command in your terminal and choose the “Minimal” template.
npm create astro@latest
This will create a project with an index route and a very simple “Welcome” component. For clarity, I recommend removing the <Welcome /> component from the route to have a clean starting point for your project.
    To add a bit of design, I’d recommend setting up Tailwind for Astro (though, you’re welcome to style your component however you would like including a style block in the component).
npx astro add tailwind
Once this is complete, you’re ready to write your first component.
    Creating the basic Heading component
    Let’s start by defining exactly what options we want to provide in our developer experience.
    For this component, we want to let developers choose from any HTML heading level (H1-H6). We also want them to be able to choose a specific font size and font weight — it may seem obvious now, but we don’t want people choosing a specific heading level for the weight and font size, so we separate those concerns.
    Finally, we want to make sure that any additional HTML attributes can be passed through to our component. There are few things worse than having a component and then not being able to do basic functionality later.
    Using Dynamic tags to create the HTML element
    Let’s start by creating a simple component that allows the user to dynamically choose the HTML element they want to use. Create a new component at ./src/components/Heading.astro.
---
// ./src/component/Heading.astro
const { as } = Astro.props;
const As = as;
---
<As>
  <slot />
</As>

To use a prop as a dynamic element name, we need the variable to start with a capital letter. We can define this as part of our naming convention and make the developer always capitalize this prop in their use, but that feels inconsistent with how most naming works within props. Instead, let's keep our focus on the DX, and take that burden on for ourselves.
    In order to dynamically register an HTML element in our component, the variable must start with a capital letter. We can convert that in the frontmatter of our component. We then wrap all the children of our component in the <As> component by using Astro’s built-in <slot /> component.
    Now, we can use this component in our index route and render any HTML element we want. Import the component at the top of the file, and then add <h1> and <h2> elements to the route.
---
// ./src/pages/index.astro
import Layout from '../layouts/Layout.astro';
import Heading from '../components/Heading.astro';
---
<Layout>
  <Heading as="h1">Hello!</Heading>
  <Heading as="h2">Hello world</Heading>
</Layout>

This will render them correctly on the page and is a great start.
    Adding more custom props as a developer interface
    Let’s clean up the element choosing by bringing it inline to our props destructuring, and then add in additional props for weight, size, and any additional HTML attributes.
    To start, let’s bring the custom element selector into the destructuring of the Astro.props object. At the same time, let’s set a sensible default so that if a developer forgets to pass this prop, they still will get a heading.
    --- // ./src/component/Heading.astro const { as: As="h2" } = Astro.props; --- <As> <slot /> </As> Next, we’ll get weight and size. Here’s our next design choice for our component system: do we make our developers know the class names they need to use or do we provide a generic set of sizes and do the mapping ourselves? Since we’re building a system, I think it’s important to move away from class names and into a more declarative setup. This will also future-proof our system by allowing us to change out the underlying styling and class system without affecting the DX.
Not only do we future-proof it, but we are also able to get around a limitation of Tailwind by doing this. Tailwind, as it turns out, can't handle dynamically created class strings, so by mapping them, we solve an immediate issue as well.
    In this case, our sizes will go from small (sm) to six times the size (6xl) and our weights will go from “light” to “bold”.
    Let’s start by adjusting our frontmatter. We need to get these props off the Astro.props object and create a couple objects that we can use to map our interface to the proper class structure.
---
// ./src/component/Heading.astro
const weights = {
  "bold": "font-bold",
  "semibold": "font-semibold",
  "medium": "font-medium",
  "light": "font-light"
}

const sizes = {
  "6xl": "text-6xl",
  "5xl": "text-5xl",
  "4xl": "text-4xl",
  "3xl": "text-3xl",
  "2xl": "text-2xl",
  "xl": "text-xl",
  "lg": "text-lg",
  "md": "text-md",
  "sm": "text-sm"
}

const { as: As = "h2", weight = "medium", size = "2xl" } = Astro.props;
---

Depending on your use case, this amount of sizes and weights might be overkill. The great thing about crafting your own component system is that you get to choose and the only limitations are the ones you set for yourself.
    From here, we can then set the classes on our component. While we could add them in a standard class attribute, I find using Astro’s built-in class:list directive to be the cleaner way to programmatically set classes in a component like this. The directive takes an array of classes that can be strings, arrays themselves, objects, or variables. In this case, we’ll select the correct size and weight from our map objects in the frontmatter.
---
// ./src/component/Heading.astro
const weights = {
  bold: "font-bold",
  semibold: "font-semibold",
  medium: "font-medium",
  light: "font-light",
};

const sizes = {
  "6xl": "text-6xl",
  "5xl": "text-5xl",
  "4xl": "text-4xl",
  "3xl": "text-3xl",
  "2xl": "text-2xl",
  xl: "text-xl",
  lg: "text-lg",
  md: "text-md",
  sm: "text-sm",
};

const { as: As = "h2", weight = "medium", size = "2xl" } = Astro.props;
---
<As class:list={[sizes[size], weights[weight]]}>
  <slot />
</As>

Your front-end should automatically shift a little in this update. Now your font weight will be slightly thicker and the classes should be applied in your developer tools.
    From here, add the props to your index route, and find the right configuration for your app.
    ---
    // ./src/pages/index.astro
    import Layout from '../layouts/Layout.astro';
    import Heading from '../components/Heading.astro';
    ---
    <Layout>
      <Heading as="h1" size="6xl" weight="light">Hello!</Heading>
      <Heading as="h3" size="xl" weight="bold">Hello world</Heading>
    </Layout>

Our custom props are finished, but currently, we can’t use any default HTML attributes, so let’s fix that.
    Adding HTML attributes to the component
    We don’t know what sorts of attributes our developers will want to add, so let’s make sure they can add any additional ones they need.
    To do that, we can spread any other prop being passed to our component, and then add them to the rendered component.
    ---
    // ./src/component/Heading.astro
    const weights = {
      // etc.
    };

    const sizes = {
      // etc.
    };

    const { as: As = "h2", weight = "medium", size = "md", ...attrs } = Astro.props;
    ---
    <As class:list={[ sizes[size], weights[weight] ]} {...attrs}>
      <slot />
    </As>

From here, we can add any arbitrary attributes to our element.
    ---
    // ./src/pages/index.astro
    import Layout from '../layouts/Layout.astro';
    import Heading from '../components/Heading.astro';
    ---
    <Layout>
      <Heading id="my-id" as="h1" size="6xl" weight="light">Hello!</Heading>
      <Heading class="text-blue-500" as="h3" size="xl" weight="bold">Hello world</Heading>
    </Layout>

I’d like to take a moment to truly appreciate one aspect of this code. On our <h1>, we add an id attribute. No big deal. On our <h3>, though, we add an additional class. My original assumption when creating this was that this would conflict with the class:list set in our component. Astro takes that worry away. When the class is passed and added to the component, Astro knows to merge the class prop with the class:list directive and automatically makes it work. One less line of code!
    In many ways, I like to consider these additional attributes as “escape hatches” in our component library. Sure, we want our developers to use our tools exactly as intended, but sometimes, it’s important to add new attributes or push our design system’s boundaries. For this, we allow them to add their own attributes, and it can create a powerful mix.
    It looks done, but are we?
    At this point, if you’re following along, it might feel like we’re done, but we have two issues with our code right now: (1) our component has “red squiggles” in our code editor and (2) our developers can make a BIG mistake if they choose.
    The red squiggles come from type errors in our component. Astro gives us TypeScript and linting by default, and sizes and weights can’t be of type: any. Not a big deal, but concerning depending on your deployment settings.
    The other issue is that our developers don’t have to choose a heading element for their heading. I’m all for escape hatches, but only if they don’t break the accessibility and SEO of my site.
Imagine if a developer used this with a div instead of an h1 on the page. What would happen? We don’t have to imagine; make the change and see.
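For example, swapping the as value on the first heading in our index route might look like this (a quick sketch; only the as prop changes from the earlier example):

    ---
    // ./src/pages/index.astro
    import Layout from '../layouts/Layout.astro';
    import Heading from '../components/Heading.astro';
    ---
    <Layout>
      <Heading id="my-id" as="div" size="6xl" weight="light">Hello!</Heading>
      <Heading class="text-blue-500" as="h3" size="xl" weight="bold">Hello world</Heading>
    </Layout>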
    It looks identical, but now there’s no <h1> element on the page. Our semantic structure is broken, and that’s bad news for many reasons. Let’s use typing to help our developers make the best decisions and know what options are available for each prop.
    Adding types to the component
    To set up our types, first we want to make sure we handle any HTML attributes that come through. Astro, again, has our backs and has the typing we need to make this work. We can import the right HTML attribute types from Astro’s typing package. Import the type and then we can extend that type for our own props. In our example, we’ll select the h1 types, since that should cover most anything we need for our headings.
    Inside the Props interface, we’ll also add our first custom type. We’ll specify that the as prop must be one of a set of strings, instead of just a basic string type. In this case, we want it to be h1–h6 and nothing else.
    ---
    // ./src/component/Heading.astro
    import type { HTMLAttributes } from 'astro/types';

    interface Props extends HTMLAttributes<'h1'> {
      as: "h1" | "h2" | "h3" | "h4" | "h5" | "h6";
    }

    // ... The rest of the file
    ---

After adding this, you’ll note that in your index route, the <h1> component should now have a red underline for the as="div" property. When you hover over it, it will let you know that the as type does not allow for div and it will show you a list of acceptable strings.
    If you delete the div, you should also now have the ability to see a list of what’s available as you try to add the string.
    While it’s not a big deal for the element selection, knowing what’s available is a much bigger deal to the rest of the props, since those are much more custom.
Let’s extend the custom typing to show all the available options. We also denote these items as optional by using ?: before defining the type.
While we could define each of these with the same type functionality as our as type, that wouldn’t keep this future-proofed. If we add a new size or weight, we’d have to remember to update our type. To solve this, we can use a fun trick in TypeScript: keyof typeof.
    There are two helper functions in TypeScript that will help us convert our weights and sizes object maps into string literal types:
typeof: This helper takes an object and converts it to a type. For instance, typeof weights would return type { bold: string, semibold: string, ...etc }.
keyof: This helper takes a type and returns a list of string literals from that type’s keys. For instance, keyof type { bold: string, semibold: string, ...etc } would return "bold" | "semibold" | ...etc, which is exactly what we want for both weights and sizes.

    ---
    // ./src/component/Heading.astro
    import type { HTMLAttributes } from 'astro/types';

    interface Props extends HTMLAttributes<'h1'> {
      as: "h1" | "h2" | "h3" | "h4" | "h5" | "h6";
      weight?: keyof typeof weights;
      size?: keyof typeof sizes;
    }

    // ... The rest of the file
    ---

Now, when we want to add a size or weight, we get a dropdown list in our code editor showing exactly what’s available on the type. If something outside the list is selected, the code editor shows an error, helping the developer know what they missed.
    While none of this is necessary in the creation of Astro components, the fact that it’s built in and there’s no additional tooling to set up means that using it is very easy to opt into.
    I’m by no means a TypeScript expert, but getting this set up for each component takes only a few additional minutes and can save a lot of time for developers down the line (not to mention, it makes onboarding developers to your system much easier).

    Crafting Strong DX With Astro Components and TypeScript originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  19. Blogger
    by: Chris Coyier
    Mon, 31 Mar 2025 15:25:36 +0000

    New CSS features help us in all sorts of different ways, but here we’re going to look at them when they power a specific type of component, or make a type of component newly possible with less or no JavaScript.
A single element CSS donut timer/countdown timer by Ben Frain — The surely least-used gradient type, conic-gradient(), is used here to make donuts (I’d call them charts) which, when animated, behave like a timer. This kind of thing changes the web in that we don’t need to reach for weightier or more complex technology to do something like this, which is actually visually pretty simple.

Sliding 3D Image Frames In CSS by Temani Afif — This one isn’t rife with new CSS features, but that almost makes it more mind-blowing to me. In the HTML is only an <img>, but the end result is a sliding door on a 3D box that slides up to reveal the photo. This requires multiple backgrounds including a conic-gradient, a box-shadow, and a very exotic clip-path, not to mention a transition for the movement.

⭐️ Carousel Configurator by the Chrome Gang — This one is wild. It only works in Google Chrome Canary because of experimental features. Scroll snapping is there of course, and that’s neat and fundamental to carousels, but the other three features are, as I said, wild. (1) a ::scroll-button which appends, apparently, a fully interactive button that advances scroll by one page. (2) a ::scroll-marker and group pseudo-element which are apparently a replacement for a scrollbar and are instead fully interactive stated buttons that represent how many pages a scrolling container has. (3) an interactivity: inert; declaration which you can apply within an animation-timeline such that off-screen parts of the carousel are not interactive. All this seems extremely experimental but I’m here for it.

Hide a header when scrolling down, show it again when scrolling up by Bramus Van Damme — With scroll-driven animations, you can “detect” if a page is being scrolled up or down, and in this case set the value of custom properties based on that information. Then with style() queries, set other styles, like hiding or showing a header. The big trick here is persisting the styles even when not scrolling, which involves an infinite transition-delay. This is the magic that keeps the header hidden until you scroll back up.

Center Items in First Row with CSS Grid by Ryan Mulligan — When you’re using CSS Grid, for the most part, you set up grid lines and place items exactly along those grid lines. That’s why it’s weird to see “staggered” looking layouts, which is what it looks like when one row of items doesn’t line up exactly with the one below it. But if you just make twice as many columns as you need and offset by one when you need to, you’ve got this kind of control. The trick is figuring out when.
  20. Blogger
    by: Lee Meyer
    Mon, 31 Mar 2025 14:59:47 +0000

    A friend DMs Lee Meyer a CodePen by Manuel Schaller containing a pure CSS simulation of one of the world’s earliest arcade games, Pong, with both paddles participating automatically, in an endless loop. The demo reminds Lee of an arcade machine in attract mode awaiting a coin, and the iconic imagery awakens muscle memory from his misspent childhood, causing him to search his pocket in which he finds the token a spooky shopkeeper gave him last year at the CSS tricks stall in the haunted carnival. The token gleams like a power-up in the light of his laptop, which has a slot he never noticed. He feeds the token into the slot, and the CodePen reloads itself. A vertical range input and a life counter appear, allowing him to control the left paddle and play the game in Chrome using a cocktail of modern and experimental CSS features to implement collision detection in CSS animations. He recalls the spooky shopkeeper’s warning that playing with these features has driven some developers to madness, but the shopkeeper’s voice in Lee’s head whispers: “Too late, we are already playing.”
    CSS collision detection: Past and present
    So, maybe the experience of using modern CSS to add collision detection and interactivity to an animation wasn’t as much like a screenplay sponsored by CSS as I depicted in the intro above — but it did feel like magic compared to what Alex Walker had to go through in 2013 to achieve a similar effect. Hilariously, he describes his implementation as “a glittering city of hacks built on the banks of the ol’ Hack River. On the Planet Hack.“ Alex’s version of CSS Pong cleverly combines checkbox hacks, sibling selectors, and :hover, whereas the CodePen below uses style queries to detect collisions. I feel it’s a nice illustration of how far CSS has come, and a testament to increased power and expressiveness of CSS more than a decade later. It shows how much power we get when combining new CSS features — in this case, that includes style queries, animatable custom properties, and animation timelines. The future CSS features of inline conditionals and custom functions might be able to simplify this code more.
Collision detection with style queries
    Interactive CSS animations with elements ricocheting off each other seems more plausible in 2025 and the code is somewhat sensible. While it’s unnecessary to implement Pong in CSS, and the CSS Working Group probably hasn’t been contemplating how to make that particular niche task easier, the increasing flexibility and power of CSS reinforce my suspicion that one day it will be a lifestyle choice whether to achieve any given effect with scripting or CSS.
    The demo is a similar number of lines of CSS to Alex’s 2013 implementation, but it didn’t feel much like a hack. It’s a demo of modern CSS features working together in the way I expected after reading the instruction booklet. Sometimes when reading introductory articles about the new features we are getting in CSS, it’s hard to appreciate how game-changing they are till you see several features working together. As often happens when pushing the boundaries of a technology, we are going to bump up against the current limitations of style queries and animations. But it’s all in good fun, and we’ll learn about these CSS features in more detail than if we had not attempted this crazy experiment.
    It does seem to work, and my 12-year-old and 7-year-old have both playtested it on my phone and laptop, so it gets the “works on Lee’s devices” seal of quality. Also, since Chrome now supports controlling animations using range inputs, we can make our game playable on mobile, unlike the 2013 version, which relied on :hover. Temani Afif provides a great explanation of how and why view progress timelines can be used to style anything based on the value of a range input.
    Using style queries to detect if the paddle hit the ball
    The ball follows a fixed path, and whether the player’s paddle intersects with the ball when it reaches our side is the only input we have into whether it continues its predetermined bouncy loop or the screen flashes red as the life counter goes down till we see the “Game Over” screen with the option to play again.
    This type of interactivity is what game designers call a quick time event. It’s still a game for sure, but five months ago, when I was young and naive, I mused in my article on animation timelines that the animation timeline feature could open the door for advanced games and interactive experiences in CSS. I wrote that a video game is just a “hyper-interactive animation.” Indeed, the above experiment shows that the new features in CSS allow us to respond to user input in sophisticated ways, but the demo also clarifies the difference between the kind of interactivity we can expect from the current incarnation of CSS versus scripting. The above experiment is more like if Pong were a game inside the old-school arcade game Dragon’s Lair, which was one giant quick time event. It only works because there are limited possible outcomes, but they are certainly less limited than what we used to be able to achieve in CSS.
    Since we know collision detection with the paddle is the only opportunity for the user to have a say in what happens next, let’s focus on that implementation. It will require more mental gymnastics than I would like, since container style queries only allow for name-value pairs with the same syntax as feature queries, meaning we can’t use “greater than” or “less than” operators when comparing numeric values like we do with container size queries which follow the same syntax as @media size queries.
    The workaround below allows us to create style queries based on the ball position being in or out of the range of the paddle. If the ball hits our side, then by default, the play field will flash red and temporarily unpause the animation that decrements the life counter (more on that later). But if the ball hits our side and is within range of the paddle, we leave the life-decrementing animation paused, and make the field background green while the ball hits the paddle. Since we don’t have “greater than” or “less than” operators in style queries, we (ab)use the min() function. If the result equals the first argument then that argument is less than or equal to the second; otherwise it’s greater than the second argument. It’s logical but made me wish for better comparison operators in style queries. Nevertheless, I was impressed that style queries allow the collision detection to be fairly readable, if a little more verbose than I would like.
    body {
      --int-ball-position-x: round(down, var(--ball-position-x));
      --min-ball-position-y-and-top-of-paddle: min(var(--ball-position-y) + var(--ball-height), var(--ping-position));
      --min-ball-position-y-and-bottom-of-paddle: min(var(--ball-position-y), var(--ping-position) + var(--paddle-height));
    }

    @container style(--int-ball-position-x: var(--ball-left-boundary)) {
      .screen {
        --lives-decrement: running;
        .field { background: red; }
      }
    }

    @container style(--min-ball-position-y-and-top-of-paddle: var(--ping-position)) and style(--min-ball-position-y-and-bottom-of-paddle: var(--ball-position-y)) and style(--int-ball-position-x: var(--ball-left-boundary)) {
      .screen {
        --lives-decrement: paused;
        .field { background: green; }
      }
    }

Responding to collisions
    Now that we can style our playing field based on whether the paddle hits the ball, we want to decrement the life counter if our paddle misses the ball, and display “Game Over” when we run out of lives. One way to achieve side effects in CSS is by pausing and unpausing keyframe animations that run forwards. These days, we can style things based on custom properties, which we can set in animations. Using this fact, we can take the power of paused animations to another level.
    body {
      animation: ball 8s infinite linear, lives 80ms forwards steps(4) var(--lives-decrement);
      --lives-decrement: paused;
    }

    .lives::after {
      content: var(--lives);
    }

    @keyframes lives {
      0% { --lives: "3"; }
      25% { --lives: "2"; }
      75% { --lives: "1"; }
      100% { --lives: "0"; }
    }

    @container style(--int-ball-position-x: var(--ball-left-boundary)) {
      .screen {
        --lives-decrement: running;
        .field { background: red; }
      }
    }

    @container style(--min-ball-position-y-and-top-of-paddle: var(--ping-position)) and style(--min-ball-position-y-and-bottom-of-paddle: var(--ball-position-y)) and style(--int-ball-position-x: 8) {
      .screen {
        --lives-decrement: paused;
        .field { background: green; }
      }
    }

    @container style(--lives: '0') {
      .field { display: none; }
      .game-over { display: flex; }
    }

So when the ball hits the wall and isn’t in range of the paddle, the lives-decrementing animation is unpaused long enough to let it complete one step. Once it reaches zero we hide the play field and display the “Game Over” screen. What’s fascinating about this part of the experiment is that it shows that, using style queries, all properties become indirectly possible to control via animations, even when working with non-animatable properties. And this applies to properties that control whether other animations play. This article touches on why play state deliberately isn’t animatable and could be dangerous to animate, but we know what we are doing, right?
    Full disclosure: The play state approach did lead to hidden complexity in the choice of duration of the animations. I knew that if I chose too long a duration for the life-decrementing counter, it might not have time to proceed to the next step while the ball was hitting the wall, but if I chose too short a duration, missing the ball once might cause the player to lose more than one life.
    I made educated guesses of suitable durations for the ball bouncing and life decrementing, and I expected that when working with fixed-duration predictable animations, the life counter would either always work or always fail. I didn’t expect that my first attempt at the implementation intermittently failed to decrement the life counter at the same point in the animation loop. Setting the durations of both these related animations to multiples of eight seems to fix the problem, but why would predetermined animations exhibit unpredictable behavior?
Forfeit the game before somebody else takes you out of the frame
    I have theories as to why the unpredictability of the collision detection seemed to be fixed by setting the ball animation to eight seconds and the lives animation to 80 milliseconds. Again, pushing CSS to its limits forces us to think deeper about how it’s working.
CSS appears to suffer from timer drift, meaning if you set a keyframes animation to last for one second, it will sometimes take slightly under or over one second. When there is a different rate of change between the ball-bouncing and life-losing, it would make sense that the potential discrepancy between the two would be pronounced and lead to unpredictable collision detection. When the rate of change in both animations is the same, they would suffer about equally from timer drift, meaning the frames still synchronize predictably. Or at least I’m hoping the chance they don’t becomes negligible.
Alex’s 2013 version of Pong uses translate3d() to move the ball even though it only moves in 2D. Alex recommends this whenever possible “for efficient animation rendering, offloading processing to the GPU for smoother visual effects.” Doing this may have been an alternative fix if it leads to more precise animation timing. There are tradeoffs so I wasn’t willing to go down that rabbit hole of trying to tune the animation performance in this article — but it could be an interesting focus for future research into CSS collision detection.
Maybe style queries take a varying amount of time to kick in, leading to some form of a race condition. It is possible that making the ball-bouncing animation slower made this problem less likely. Maybe the bug remains lurking in the shadows somewhere. What did I expect from a hack I achieved using a magic token from a spooky shopkeeper? Haven’t I seen any eighties movie ever?

Outro
    You finish reading the article, and feel sure that the author’s rationale for his supposed fix for the bug is hogwash. Clearly, Lee has been driven insane by the allure of overpowering new CSS features, whereas you respect the power of CSS, but you also respect its limitations. You sit down to spend a few minutes with the collision detection CodePen to prove it is still broken, but then find other flaws in the collision detection, and you commence work on a fork that will be superior. Hey, speaking of timer drift, how is it suddenly 1 a.m.? Only a crazy person would stay up that late playing with CSS when they have to work the next day. “Madness,” repeats the spooky shopkeeper inside your head, and his laughter echoes somewhere in the night.
    Roll the credits
    This looping Pong CSS animation by Manuel Schaller gave me an amazing basis for adding the collision detection. His twitching paddle animations help give the illusion of playing against a computer opponent, so forking his CodePen let me focus on implementing the collision detection rather than reinventing Pong.
    This author is grateful to the junior testing team, comprised of his seven-year-old and twelve-year-old, who declared the CSS Pong implementation “pretty cool.” They also suggested the green and red flashes to signal collisions and misses.
    The intro and outro for this article were sponsored by the spooky shopkeeper who sells dangerous CSS tricks. He also sells frozen yoghurt, which he calls froghurt.
    Worlds Collide: Keyframe Collision Detection Using Style Queries originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  21. Blogger

    ChatPDF

    by: aiparabellum.com
    Mon, 31 Mar 2025 09:26:18 +0000

    Welcome to the revolutionary world of ChatPDF, a cutting-edge tool that transforms the way you interact with any PDF document. Designed to cater to students, researchers, and professionals alike, ChatPDF leverages powerful AI technology to make understanding and navigating through complex PDFs as easy as having a conversation. Whether it’s a challenging academic paper, a detailed legal contract, or a technical manual, ChatPDF allows you to ask questions directly and get answers instantly.
    Features
    ChatPDF boasts a range of features that make it an indispensable tool for anyone dealing with PDF documents:
AI-Powered Understanding: Simply upload your PDF and start chatting with it as you would with a human expert.
Multi-File Chats: Organize your PDFs into folders and interact with multiple documents in a single conversation for comparative analysis and more comprehensive insights.
Cited Sources: Every answer provided by ChatPDF includes references to the source location in the original document, ensuring you can verify and explore the context further.
Any Language Capability: ChatPDF supports PDFs in any language, making it a globally accessible tool that breaks down language barriers.
User-Friendly Interface: Easy to navigate interface allows for straightforward uploading and interaction with documents.

How It Works
    Engaging with ChatPDF is a straightforward process:
Upload Your PDF: Drag and drop your PDF file into the ChatPDF platform.
Ask Questions: Once your PDF is uploaded, simply type in your questions to start the conversation.
Receive Answers: ChatPDF quickly analyzes the content and provides you with answers and summaries directly from the text.

Benefits
    Using ChatPDF provides numerous advantages:
Enhanced Productivity: Saves time by providing quick answers to specific questions without the need for manual searching.
Improved Research Efficiency: Ideal for researchers and students who need to extract information and gather insights from lengthy documents.
Accessibility: Makes information more accessible for non-native speakers and professionals dealing with foreign language documents.
Educational Support: Assists students in studying for exams and understanding complex topics effortlessly.
Professional Aid: Helps professionals in understanding and navigating through dense and intricate documents like contracts and reports.

Pricing
    ChatPDF offers its services under a freemium model:
Free Version: Users can enjoy basic features without any cost, suitable for casual or light users.
Premium Version: Advanced features and capabilities are available for a subscription fee, details of which can be found directly on the ChatPDF website.

Review
    ChatPDF has received widespread acclaim for its innovative approach to handling PDF documents. Users around the globe praise the tool for its ability to simplify complex information and make educational and professional materials more accessible. The ease of use and the ability to interact with documents in any language are frequently highlighted as some of the app’s strongest points.
    Conclusion
    In conclusion, ChatPDF represents a significant advancement in how individuals interact with PDF documents. By combining AI technology with a user-friendly interface, it offers a unique solution that enhances learning, research, and professional activities. Whether you are a student, a researcher, or a professional, ChatPDF provides a powerful tool to streamline your workflow and improve your understanding of complex documents.
    The post ChatPDF appeared first on AI Parabellum • Your Go-To AI Tools Directory for Success.
  22. Blogger
    by: aiparabellum.com
    Sat, 29 Mar 2025 15:52:47 +0000

    The last few decades have witnessed a remarkable evolution in the field of computing power. From the early days of room-sized computers with minimal processing capabilities to the modern era of pocket-sized devices with immense computational power, the progress has been exponential. This exponential growth in computing power is often attributed to Moore’s Law, a principle that has shaped the technology industry for over five decades. Understanding Moore’s Law is crucial in comprehending the rapid advancements in computing and predicting the future of this ever-evolving field.
    Understanding Moore’s Law: Definition and Origin
    Moore’s Law, formulated by Gordon Moore, one of the founders of Intel, is a principle that states the number of transistors on a microchip doubles approximately every two years, while the cost is halved. This law has been a driving force behind the rapid advancement of technology, as it sets the pace for the development of faster, smaller, and more efficient electronic devices. Moore initially observed this trend in 1965, and his prediction has held remarkably true for several decades, guiding the industry in its pursuit of ever-increasing computing power.
    The Mathematical Equation Behind Moore’s Law
    While Moore’s Law is often discussed in terms of its observations and implications, there is also a mathematical equation that underlies this phenomenon. The equation that describes Moore’s Law can be expressed as follows: N = N₀ * 2^(t/T), where N represents the number of transistors on a microchip at a given time, N₀ is the initial number of transistors, t is the time elapsed, and T is the doubling time. This equation demonstrates the exponential growth of transistors on a chip, as the number of transistors doubles every T years. It provides a quantitative understanding of the rapid advancement of computing power and allows for predictions about future technological developments.
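To make the doubling concrete, here is a small illustrative sketch (not from the original article; the starting count of 2,300 transistors, roughly the Intel 4004 of 1971, and the 20-year span are example values):

    // Moore's Law: N = N0 * 2^(t / T)
    function transistorCount(n0: number, yearsElapsed: number, doublingTime = 2): number {
      return n0 * Math.pow(2, yearsElapsed / doublingTime);
    }

    // Starting from ~2,300 transistors, ten doublings (20 years at T = 2)
    // gives 2,300 * 1,024, i.e. roughly 2.36 million transistors.
    console.log(Math.round(transistorCount(2300, 20))); // 2355200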
    An Exploration of Technology Scaling and Transistors
    To comprehend the implications of Moore’s Law, it is essential to delve into the concepts of technology scaling and transistors. Technology scaling refers to the process of shrinking the size of transistors on a microchip, allowing for more transistors to be packed into the same space. This scaling leads to increased computational power and improved performance, as smaller transistors enable faster switching speeds and reduced power consumption. Transistors, the fundamental building blocks of electronic devices, are responsible for controlling the flow of electrical current within a circuit. As the number of transistors increases, more complex computations can be performed, leading to enhanced processing capabilities and the ability to handle more data. The continuous advancement in the scaling of transistors has been a crucial factor in the exponential growth of computing power.
    Implications of Moore’s Law on Computing Industry
    The impact of Moore’s Law on the computing industry cannot be overstated. It has provided a roadmap for technological progress, shaping the strategies and investments of companies in the development of new products. The doubling of transistors every two years has led to the creation of smaller and more powerful electronic devices, such as smartphones, laptops, and high-performance computing systems. This increased computing power has revolutionized various sectors, including healthcare, finance, education, and entertainment, enabling the development of innovative applications and solutions. Moore’s Law has also driven competition among technology companies, as they strive to stay ahead by constantly improving their products and pushing the boundaries of computing power.
    Challenges to the Continuation of Moore’s Law
    Despite its remarkable track record, Moore’s Law is facing challenges that threaten its continuation. One of the major obstacles is the physical limitations of semiconductor technology. As transistors become increasingly small, quantum effects and other physical phenomena start to affect their performance. Additionally, the cost of research and development required to keep up with Moore’s Law is escalating, making it more difficult for companies to invest in new technologies. The limitations of traditional silicon-based technology and the increasing complexity of chip manufacturing pose significant hurdles to sustaining the historical rate of progress. Overcoming these challenges will require innovations in materials, manufacturing techniques, and alternative computing architectures.
    Alternatives to Moore’s Law: Post-Moore Computing
    As the limitations of Moore’s Law become more apparent, researchers are exploring alternative approaches to continue the trend of increasing computing power. Post-Moore Computing encompasses a range of technologies and concepts that aim to overcome the physical limitations of traditional transistor scaling. This includes innovations such as new materials like graphene and carbon nanotubes, novel computing architectures like neuromorphic and quantum computing, and advancements in software optimization techniques. These alternative paths offer the potential for continued progress in computing power beyond the limitations of Moore’s Law. While these technologies are still in their early stages, they hold the promise of ushering in a new era of computing and enabling further advancements in various fields.
    The Future of Computing Power
    The future of computing power is both exciting and uncertain. While the challenges to sustaining Moore’s Law are significant, the industry is continuously pushing the boundaries of technology to find new solutions. Whether through advancements in traditional semiconductor technology or the adoption of post-Moore computing paradigms, the quest for greater computing power will likely persist. The evolution of computing power has transformed the world we live in, and it will continue to shape our lives in ways we cannot yet fully comprehend. As we embark on this journey into the future, one thing is certain: the law of computing power will remain a driving force behind technological progress for years to come.
    The post The Law of Computing Power: Moore’s Law appeared first on AI Parabellum • Your Go-To AI Tools Directory for Success.
  23. Blogger
    by: Frederik Dohr
    Fri, 28 Mar 2025 15:04:24 +0000

    Comparing visual artifacts can be a powerful, if fickle, approach to automated testing. Playwright makes this seem simple for websites, but the details might take a little finessing.
    Recent downtime prompted me to scratch an itch that had been plaguing me for a while: The style sheet of a website I maintain has grown just a little unwieldy as we’ve been adding code while exploring new features. Now that we have a better idea of the requirements, it’s time for internal CSS refactoring to pay down some of our technical debt, taking advantage of modern CSS features (like using CSS nesting for more obvious structure). More importantly, a cleaner foundation should make it easier to introduce that dark mode feature we’re sorely lacking so we can finally respect users’ preferred color scheme.
    However, being of the apprehensive persuasion, I was reluctant to make large changes for fear of unwittingly introducing bugs. I needed something to guard against visual regressions while refactoring — except that means snapshot testing, which is notoriously slow and brittle.
    In this context, snapshot testing means taking screenshots to establish a reliable baseline against which we can compare future results. As we’ll see, those artifacts are influenced by a multitude of factors that might not always be fully controllable (e.g. timing, variable hardware resources, or randomized content). We also have to maintain state between test runs, i.e. save those screenshots, which complicates the setup and means our test code alone doesn’t fully describe expectations.
    Having procrastinated without a more agreeable solution revealing itself, I finally set out to create what I assumed would be a quick spike. After all, this wouldn’t be part of the regular test suite; just a one-off utility for this particular refactoring task.
    Fortunately, I had vague recollections of past research and quickly rediscovered Playwright’s built-in visual comparison feature. Because I try to select dependencies carefully, I was glad to see that Playwright seems not to rely on many external packages.
    Setup
    The recommended setup with npm init playwright@latest does a decent job, but my minimalist taste had me set everything up from scratch instead. This do-it-yourself approach also helped me understand how the different pieces fit together.
    Given that I expect snapshot testing to only be used on rare occasions, I wanted to isolate everything in a dedicated subdirectory, called test/visual; that will be our working directory from here on out. We’ll start with package.json to declare our dependencies, adding a few helper scripts (spoiler!) while we’re at it:
    { "scripts": { "test": "playwright test", "report": "playwright show-report", "update": "playwright test --update-snapshots", "reset": "rm -r ./playwright-report ./test-results ./viz.test.js-snapshots || true" }, "devDependencies": { "@playwright/test": "^1.49.1" } } If you don’t want node_modules hidden in some subdirectory but also don’t want to burden the root project with this rarely-used dependency, you might resort to manually invoking npm install --no-save @playwright/test in the root directory when needed.
    With that in place, npm install downloads Playwright. Afterwards, npx playwright install downloads a range of headless browsers. (We’ll use npm here, but you might prefer a different package manager and task runner.)
    We define our test environment via playwright.config.js with about a dozen basic Playwright settings:
    import { defineConfig, devices } from "@playwright/test";

    let BROWSERS = ["Desktop Firefox", "Desktop Chrome", "Desktop Safari"];
    let BASE_URL = "http://localhost:8000";
    let SERVER = "cd ../../dist && python3 -m http.server";
    let IS_CI = !!process.env.CI;

    export default defineConfig({
      testDir: "./",
      fullyParallel: true,
      forbidOnly: IS_CI,
      retries: 2,
      workers: IS_CI ? 1 : undefined,
      reporter: "html",
      webServer: {
        command: SERVER,
        url: BASE_URL,
        reuseExistingServer: !IS_CI
      },
      use: {
        baseURL: BASE_URL,
        trace: "on-first-retry"
      },
      projects: BROWSERS.map(ua => ({
        name: ua.toLowerCase().replaceAll(" ", "-"),
        use: { ...devices[ua] }
      }))
    });

Here we expect our static website to already reside within the root directory’s dist folder and to be served at localhost:8000 (see SERVER; I prefer Python there because it’s widely available). I’ve included multiple browsers for illustration purposes. Still, we might reduce that number to speed things up (thus our simple BROWSERS list, which we then map to Playwright’s more elaborate projects data structure). Similarly, continuous integration is YAGNI for my particular scenario, so that whole IS_CI dance could be discarded.
    Capture and compare
    Let’s turn to the actual tests, starting with a minimal sample.test.js file:
    import { test, expect } from "@playwright/test";

    test("home page", async ({ page }) => {
      await page.goto("/");
      await expect(page).toHaveScreenshot();
    });

npm test executes this little test suite (based on file-name conventions). The initial run always fails because it first needs to create baseline snapshots against which subsequent runs compare their results. Invoking npm test once more should report a passing test.
    Changing our site, e.g. by recklessly messing with build artifacts in dist, should make the test fail again. Such failures will offer various options to compare expected and actual visuals:
    We can also inspect those baseline snapshots directly: Playwright creates a folder for screenshots named after the test file (sample.test.js-snapshots in this case), with file names derived from the respective test’s title (e.g. home-page-desktop-firefox.png).
    Generating tests
    Getting back to our original motivation, what we want is a test for every page. Instead of arduously writing and maintaining repetitive tests, we’ll create a simple web crawler for our website and have tests generated automatically; one for each URL we’ve identified.
    Playwright’s global setup enables us to perform preparatory work before test discovery begins: Determine those URLs and write them to a file. Afterward, we can dynamically generate our tests at runtime.
    While there are other ways to pass data between the setup and test-discovery phases, having a file on disk makes it easy to modify the list of URLs before test runs (e.g. temporarily ignoring irrelevant pages).
    Site map
    The first step is to extend playwright.config.js by inserting globalSetup and exporting two of our configuration values:
    export let BROWSERS = ["Desktop Firefox", "Desktop Chrome", "Desktop Safari"];
    export let BASE_URL = "http://localhost:8000";

    // etc.

    export default defineConfig({
      // etc.
      globalSetup: require.resolve("./setup.js")
    });

Although we’re using ES modules here, we can still rely on CommonJS-specific APIs like require.resolve and __dirname. It appears there’s some Babel transpilation happening in the background, so what’s actually being executed is probably CommonJS? Such nuances sometimes confuse me because it isn’t always obvious what’s being executed where.
    We can now reuse those exported values within a newly created setup.js, which spins up a headless browser to crawl our site (just because that’s easier here than using a separate HTML parser):
    import { BASE_URL, BROWSERS } from "./playwright.config.js";
    import { createSiteMap, readSiteMap } from "./sitemap.js";
    import playwright from "@playwright/test";

    export default async function globalSetup(config) {
      // only create site map if it doesn't already exist
      try {
        readSiteMap();
        return;
      } catch(err) {}

      // launch browser and initiate crawler
      let browser = playwright.devices[BROWSERS[0]].defaultBrowserType;
      browser = await playwright[browser].launch();
      let page = await browser.newPage();
      await createSiteMap(BASE_URL, page);
      await browser.close();
    }

This is fairly boring glue code; the actual crawling is happening within sitemap.js:
createSiteMap determines URLs and writes them to disk.
readSiteMap merely reads any previously created site map from disk. This will be our foundation for dynamically generating tests. (We’ll see later why this needs to be synchronous.)

Fortunately, the website in question provides a comprehensive index of all pages, so my crawler only needs to collect unique local URLs from that index page:
    function extractLocalLinks(baseURL) {
      let urls = new Set();
      let offset = baseURL.length;
      for(let { href } of document.links) {
        if(href.startsWith(baseURL)) {
          let path = href.slice(offset);
          urls.add(path);
        }
      }
      return Array.from(urls);
    }

Wrapping that in more boring glue code gives us our sitemap.js:
    import { readFileSync, writeFileSync } from "node:fs";
    import { join } from "node:path";

    let ENTRY_POINT = "/topics";
    let SITEMAP = join(__dirname, "./sitemap.json");

    export async function createSiteMap(baseURL, page) {
      await page.goto(baseURL + ENTRY_POINT);
      let urls = await page.evaluate(extractLocalLinks, baseURL);
      let data = JSON.stringify(urls, null, 4);
      writeFileSync(SITEMAP, data, { encoding: "utf-8" });
    }

    export function readSiteMap() {
      try {
        var data = readFileSync(SITEMAP, { encoding: "utf-8" });
      } catch(err) {
        if(err.code === "ENOENT") {
          throw new Error("missing site map");
        }
        throw err;
      }
      return JSON.parse(data);
    }

    function extractLocalLinks(baseURL) {
      // etc.
    }

The interesting bit here is that extractLocalLinks is evaluated within the browser context — thus we can rely on DOM APIs, notably document.links — while the rest is executed within the Playwright environment (i.e. Node).
    Tests
    Now that we have our list of URLs, we basically just need a test file with a simple loop to dynamically generate corresponding tests:
    for(let url of readSiteMap()) {
      test(`page at ${url}`, async ({ page }) => {
        await page.goto(url);
        await expect(page).toHaveScreenshot();
      });
    }

This is why readSiteMap had to be synchronous above: Playwright doesn’t currently support top-level await within test files.
    In practice, we’ll want better error reporting for when the site map doesn’t exist yet. Let’s call our actual test file viz.test.js:
    import { readSiteMap } from "./sitemap.js";
    import { test, expect } from "@playwright/test";

    let sitemap = [];
    try {
      sitemap = readSiteMap();
    } catch(err) {
      test("site map", ({ page }) => {
        throw new Error("missing site map");
      });
    }

    for(let url of sitemap) {
      test(`page at ${url}`, async ({ page }) => {
        await page.goto(url);
        await expect(page).toHaveScreenshot();
      });
    }

Getting here was a bit of a journey, but we’re pretty much done… unless we have to deal with reality, which typically takes a bit more tweaking.
    Exceptions
    Because visual testing is inherently flaky, we sometimes need to compensate via special casing. Playwright lets us inject custom CSS, which is often the easiest and most effective approach. Tweaking viz.test.js…
    // etc.
    import { join } from "node:path";

    let OPTIONS = {
      stylePath: join(__dirname, "./viz.tweaks.css")
    };

    // etc.
    await expect(page).toHaveScreenshot(OPTIONS);
    // etc.

… allows us to define exceptions in viz.tweaks.css:
    /* suppress state */
    main a:visited {
      color: var(--color-link);
    }

    /* suppress randomness */
    iframe[src$="/articles/signals-reactivity/demo.html"] {
      visibility: hidden;
    }

    /* suppress flakiness */
    body:has(h1 a[href="/wip/unicode-symbols/"]) {
      main tbody > tr:last-child > td:first-child {
        font-size: 0;
        visibility: hidden;
      }
    }

:has() strikes again!
    Page vs. viewport
    At this point, everything seemed hunky-dory to me, until I realized that my tests didn’t actually fail after I had changed some styling. That’s not good! What I hadn’t taken into account is that .toHaveScreenshot only captures the viewport rather than the entire page. We can rectify that by further extending playwright.config.js.
    export let WIDTH = 800;
    export let HEIGHT = WIDTH;

    // etc.

    projects: BROWSERS.map(ua => ({
      name: ua.toLowerCase().replaceAll(" ", "-"),
      use: {
        ...devices[ua],
        viewport: {
          width: WIDTH,
          height: HEIGHT
        }
      }
    }))

…and then by adjusting viz.test.js‘s test-generating loop:
    import { WIDTH, HEIGHT } from "./playwright.config.js";

    // etc.

    for(let url of sitemap) {
      test(`page at ${url}`, async ({ page }) => {
        await checkSnapshot(url, page);
      });
    }

    async function checkSnapshot(url, page) {
      // determine page height with default viewport
      await page.setViewportSize({ width: WIDTH, height: HEIGHT });
      await page.goto(url);
      await page.waitForLoadState("networkidle");
      let height = await page.evaluate(getFullHeight);

      // resize viewport before snapshotting
      await page.setViewportSize({ width: WIDTH, height: Math.ceil(height) });
      await page.waitForLoadState("networkidle");
      await expect(page).toHaveScreenshot(OPTIONS);
    }

    function getFullHeight() {
      return document.documentElement.getBoundingClientRect().height;
    }

Note that we’ve also introduced a waiting condition, holding until there’s no network traffic for a while in a crude attempt to account for stuff like lazy-loading images.
    Be aware that capturing the entire page is more resource-intensive and doesn’t always work reliably: You might have to deal with layout shifts or run into timeouts for long or asset-heavy pages. In other words: This risks exacerbating flakiness.
    Conclusion
    So much for that quick spike. While it took more effort than expected (I believe that’s called “software development”), this might actually solve my original problem now (not a common feature of software these days). Of course, shaving this yak still leaves me itchy, as I have yet to do the actual work of scratching CSS without breaking anything. Then comes the real challenge: Retrofitting dark mode to an existing website. I just might need more downtime.
    Automated Visual Regression Testing With Playwright originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  24. Blogger
    by: Abhishek Prakash
    Fri, 28 Mar 2025 18:10:14 +0530

    Welcome to the latest edition of LHB Linux Digest. I don't know if you have noticed but I have changed the newsletter day from Wednesday to Friday so that you can enjoy your Fridays learning something new and discovering some new tool. Enjoy 😄
Here are the highlights of this edition:

Creating a .deb package from a Python app
Quick Vim tip on indentation
Pushing a Docker image to Docker Hub
And more tools, tips and memes for you

This edition of LHB Linux Digest newsletter is supported by PikaPods.

❇️ Self-hosting without hassle
    PikaPods allows you to quickly deploy your favorite open source software. All future updates are handled automatically by PikaPods while you enjoy using the software. I use it to self host Umami analytics.
    Oh! You get $5 free credit, so try it out and see if you could rely on PikaPods.
PikaPods - Instant Open Source App Hosting: Run the finest Open Source web apps from $1.20/month, fully managed, no tracking, no ads, full privacy. Self-hosting was never this convenient.
     
  25. Blogger
    by: Abhishek Kumar
    Fri, 28 Mar 2025 17:18:28 +0530

    Docker has changed the way we package and distribute applications, but I only truly appreciated its power when I needed to share a project with a friend.
    Initially, we used docker save and docker load to transfer the image, which worked fine but was cumbersome.
    Then, while browsing the Docker documentation, I discovered how easy it was to push images to Docker Hub.
    That was a game-changer! Now, I push my final builds to Docker Hub the moment they're done, allowing my clients and collaborators to pull and run them effortlessly.
    In this guide, I’ll walk you through building, tagging, pushing, and running a Docker image.
    To keep things simple, we’ll create a minimal test image.
💡If you’re new to Docker and want a deep dive, check out our DevOps course, which covers Docker extensively. We’ve covered Docker installation in countless tutorials as well, so we’ll skip that part here and jump straight into writing a Dockerfile.

Writing a simple Dockerfile
    A Dockerfile defines how to build your image. It contains a series of instructions that tell Docker how to construct the image layer by layer.
    Let’s create a minimal one:
    # Use an official lightweight image
    FROM alpine:latest

    # Install a simple utility
    RUN apk add --no-cache figlet

    # Set the default command
    CMD ["/usr/bin/figlet", "Docker is Fun!"]

FROM alpine:latest – This sets the base image to Alpine Linux, a minimal and lightweight distribution.
RUN apk add --no-cache figlet – Installs the figlet package using Alpine's package manager (apk), with the --no-cache option to keep the image clean.
CMD ["/usr/bin/figlet", "Docker is Fun!"] – Specifies the default command that will run when a container is started.

Save this file as Dockerfile in an empty directory.
    Building the docker image
    To build the image, navigate to the directory containing the Dockerfile and run:
    docker build -t <cool-image-name> .

docker build – The command to build an image.
-t cool-image-name – The -t flag assigns a tag (cool-image-name) to the image, making it easier to reference later.
. – The dot tells Docker to look for the Dockerfile in the current directory.

Once completed, list your images to confirm:
    docker images

Running the docker image
    To run the container and see the output:
    docker run <cool-image-name>

You should see ASCII text saying, “Docker is fun!”
    Tagging the Image
    Before pushing to a registry, we need to tag the image with our Docker Hub username:
    docker tag <cool-image-name> your-dockerhub-username/cool-image-name:latest

docker tag – Creates an alias for the image.
your-dockerhub-username/cool-image-name:latest – This follows the format username/repository-name:tag. The latest tag is used as a default version identifier.

List images again to see the updated tag:
    docker images

Pushing to Docker Hub
    First, log in to Docker Hub:
    docker login

💡If you’re using two-factor authentication, you’ll need to generate an access token from Docker Hub and use that instead of your password.

You will be prompted to enter your Docker Hub username and password.
    Once authenticated, you can push the image:
    docker push your-dockerhub-username/cool-image-name:latest

And that’s it! Your image is now live on Docker Hub.
    Anyone can pull and run it with:
    docker pull your-dockerhub-username/cool-image-name:latest
    docker run your-dockerhub-username/cool-image-name

Feels great, doesn’t it?
    Alternatives to Docker Hub
    Docker Hub is not the only place to store images. Here are some alternatives:
GitHub Container Registry (GHCR.io) – If you already use GitHub, this is a great option as it integrates with GitHub Actions and repositories seamlessly.
Google Container Registry (GCR.io) – Ideal for those using Google Cloud services. It allows private and public image hosting with tight integration into GCP workloads.
Amazon Elastic Container Registry (ECR) – Part of AWS, ECR is an excellent option for those using AWS infrastructure, providing strong security and IAM-based authentication.

Self-hosted Docker Registry
    If you need complete control over your images, you can set up your own registry by running:
    docker run -d -p 5000:5000 --name registry registry:2

This starts a private registry on port 5000, allowing you to store and retrieve images without relying on external providers.
You can read more about hosting your own registry in Docker’s official documentation.
    Final thoughts
    Building and pushing Docker images has completely streamlined how I distribute my projects.
    What once felt like a tedious process is now as simple as writing a Dockerfile, tagging an image, and running a single push command.
    No more manual file transfers or complex setup steps, it’s all automated and ready to be pulled anywhere.
    However, Docker Hub's free tier limits private repositories to just one. For personal projects, that’s a bit restrictive, which is why I’m more inclined toward self-hosting my own Docker registry.
    It gives me complete control, avoids limitations, and ensures I don’t have to worry about third-party policies.
    What about you? Which container registry do you use for your projects? Have you considered self-hosting your own registry? Drop your thoughts in the comments.
