Blog Entries posted by Blogger

  1. Powering Search With Astro Actions and Fuse.js

    by: Bryan Robinson
    Tue, 11 Mar 2025 15:26:10 +0000

    Static sites are wonderful. I’m a big fan.
They also have their issues. Namely, static sites are either purely static, or the frameworks that generate them lose true static generation the moment you dip your toes in the direction of server routes.
    Astro has been watching the front-end ecosystem and is trying to keep one foot firmly embedded in pure static generation, and the other in a powerful set of server-side functionality.
    With Astro Actions, Astro brings a lot of the power of the server to a site that is almost entirely static. A good example of this sort of functionality is dealing with search. If you have a content-based site that can be purely generated, adding search is either going to be something handled entirely on the front end, via a software-as-a-service solution, or, in other frameworks, converting your entire site to a server-side application.
    With Astro, we can generate most of our site during our build, but have a small bit of server-side code that can handle our search functionality using something like Fuse.js.
    In this demo, we’ll use Fuse to search through a set of personal “bookmarks” that are generated at build time, but return proper results from a server call.
GitHub | Live Demo

Starting the project
    To get started, we’ll just set up a very basic Astro project. In your terminal, run the following command:
npm create astro@latest

Astro’s adorable mascot Houston is going to ask you a few questions in your terminal. Here are the basic responses you’ll need:
• Where should we create your new project? Wherever you’d like, but I’ll be calling my directory ./astro-search
• How would you like to start your new project? Choose the basic minimalist starter.
• Install dependencies? Yes, please!
• Initialize a new git repository? I’d recommend it, personally!

This will create a directory in the location specified and install everything you need to start an Astro project. Open the directory in your code editor of choice and run npm run dev in your terminal in the directory.
    When you run your project, you’ll see the default Astro project homepage.
    We’re ready to get our project rolling!
    Basic setup
    To get started, let’s remove the default content from the homepage. Open the  /src/pages/index.astro file.
    This is a fairly barebones homepage, but we want it to be even more basic. Remove the <Welcome /> component, and we’ll have a nice blank page.
    For styling, let’s add Tailwind and some very basic markup to the homepage to contain our site.
npx astro add tailwind

The astro add command will install Tailwind and attempt to set up all the boilerplate code for you (handy!). The CLI will ask you if you want it to add the various components; I recommend letting it, but if anything fails, you can copy the code needed from each of the steps in the process. As the last step for getting to work with Tailwind, the CLI will tell you to import the styles into a shared layout. Follow those instructions, and we can get to work.
    Let’s add some very basic markup to our new homepage.
---
// ./src/pages/index.astro
import Layout from '../layouts/Layout.astro';
---

<Layout>
  <div class="max-w-3xl mx-auto my-10">
    <h1 class="text-3xl text-center">My latest bookmarks</h1>
    <p class="text-xl text-center mb-5">This is only 10 of A LARGE NUMBER THAT WE'LL CHANGE LATER</p>
  </div>
</Layout>

Your site should now look like this.
    Not exactly winning any awards yet! That’s alright. Let’s get our bookmarks loaded in.
    Adding bookmark data with Astro Content Layer
    Since not everyone runs their own application for bookmarking interesting items, you can borrow my data. Here’s a small subset of my bookmarks, or you can go get 110 items from this link on GitHub. Add this data as a file in your project. I like to group data in a data directory, so my file lives in /src/data/bookmarks.json.
[
  { "pageTitle": "Our Favorite Sandwich Bread | King Arthur Baking", "url": "https://www.kingarthurbaking.com/recipes/our-favorite-sandwich-bread-recipe", "description": "Classic American sandwich loaf, perfect for French toast and sandwiches.", "id": "007y8pmEOvhwldfT3wx1MW" },
  { "pageTitle": "Chris Coyier's discussion of Automatic Social Share Images | CSS-Tricks", "url": "https://css-tricks.com/automatic-social-share-images/", "description": "It's a pretty low-effort thing to get a big fancy link preview on social media. Toss a handful of specific <meta> tags on a URL and you get a big image-title-description thing", "id": "04CXDvGQo19m0oXERL6bhF" },
  { "pageTitle": "Automatic Social Share Images | ryanfiller.com", "url": "https://www.ryanfiller.com/blog/automatic-social-share-images/", "description": "Setting up automatic social share images with Puppeteer and Netlify Functions.", "id": "04CXDvGQo19m0oXERLoC10" },
  { "pageTitle": "Emma Wedekind: Foundations of Design Systems / React Boston 2019 - YouTube", "url": "https://m.youtube.com/watch?v=pXb2jA43A6k", "description": "Emma Wedekind: Foundations of Design Systems / React Boston 2019 Presented by: Emma Wedekind – LogMeIn Design systems are in the world around us, from street...", "id": "0d56d03e-aba4-4ebd-9db8-644bcc185e33" },
  { "pageTitle": "Editorial Design Patterns With CSS Grid And Named Columns — Smashing Magazine", "url": "https://www.smashingmagazine.com/2019/10/editorial-design-patterns-css-grid-subgrid-naming/", "description": "By naming lines when setting up our CSS Grid layouts, we can tap into some interesting and useful features of Grid — features that become even more powerful when we introduce subgrids.", "id": "13ac1043-1b7d-4a5b-a3d8-b6f5ec34cf1c" },
  { "pageTitle": "Netlify pro tip: Using Split Testing to power private beta releases - DEV Community 👩‍💻👨‍💻", "url": "https://dev.to/philhawksworth/netlify-pro-tip-using-split-testing-to-power-private-beta-releases-a7l", "description": "Giving users ways to opt in and out of your private betas. Video and tutorial.", "id": "1fbabbf9-2952-47f2-9005-25af90b0229e" },
  { "pageTitle": "Netlify Public Folder, Part I: What? Recreating the Dropbox Public Folder With Netlify | Jim Nielsen’s Weblog", "url": "https://blog.jim-nielsen.com/2019/netlify-public-folder-part-i-what/", "id": "2607e651-7b64-4695-8af9-3b9b88d402d5" },
  { "pageTitle": "Why Is CSS So Weird? - YouTube", "url": "https://m.youtube.com/watch?v=aHUtMbJw8iA&feature=youtu.be", "description": "Love it or hate it, CSS is weird! It doesn't work like most programming languages, and it doesn't work like a design tool either. But CSS is also solving a v...", "id": "2e29aa3b-45b8-4ce4-85b7-fd8bc50daccd" },
  { "pageTitle": "Internet world despairs as non-profit .org sold for $$$$ to private equity firm, price caps axed • The Register", "url": "https://www.theregister.co.uk/2019/11/20/org_registry_sale_shambles/", "id": "33406b33-c453-44d3-8b18-2d2ae83ee73f" },
  { "pageTitle": "Netlify Identity for paid subscriptions - Access Control / Identity - Netlify Community", "url": "https://community.netlify.com/t/netlify-identity-for-paid-subscriptions/1947/2", "description": "I want to limit certain functionality on my website to paying users. Now I’m using a payment provider (Mollie) similar to Stripe. My idea was to use the webhook fired by this service to call a Netlify function and give…", "id": "34d6341c-18eb-4744-88e1-cfbf6c1cfa6c" },
  { "pageTitle": "SmashingConf Freiburg 2019: Videos And Photos — Smashing Magazine", "url": "https://www.smashingmagazine.com/2019/10/smashingconf-freiburg-2019/", "description": "We had a lovely time at SmashingConf Freiburg. This post wraps up the event and also shares the video of all of the Freiburg presentations.", "id": "354cbb34-b24a-47f1-8973-8553ed1d809d" },
  { "pageTitle": "Adding Google Calendar to your JAMStack", "url": "https://www.raymondcamden.com/2019/11/18/adding-google-calendar-to-your-jamstack", "description": "A look at using Google APIs to add events to your static site.", "id": "361b20c4-75ce-46b3-b6d9-38139e03f2ca" },
  { "pageTitle": "How to Contribute to an Open Source Project | CSS-Tricks", "url": "https://css-tricks.com/how-to-contribute-to-an-open-source-project/", "description": "The following is going to get slightly opinionated and aims to guide someone on their journey into open source. As a prerequisite, you should have basic", "id": "37300606-af08-4d9a-b5e3-12f64ebbb505" },
  { "pageTitle": "Functions | Netlify", "url": "https://www.netlify.com/docs/functions/", "description": "Netlify builds, deploys, and hosts your front end. Learn how to get started, see examples, and view documentation for the modern web platform.", "id": "3bf9e31b-5288-4b3b-89f2-97034603dbf6" },
  { "pageTitle": "Serverless Can Help You To Focus - By Simona Cotin", "url": "https://hackernoon.com/serverless-can-do-that-7nw32mk", "id": "43b1ee63-c2f8-4e14-8700-1e21c2e0a8b1" },
  { "pageTitle": "Nuxt, Next, Nest?! My Head Hurts. - DEV Community 👩‍💻👨‍💻", "url": "https://dev.to/laurieontech/nuxt-next-nest-my-head-hurts-5h98", "description": "I clearly know what all of these things are. Their names are not at all similar. But let's review, just to make sure we know...", "id": "456b7d6d-7efa-408a-9eca-0325d996b69c" },
  { "pageTitle": "Consuming a headless CMS GraphQL API with Eleventy - Webstoemp", "url": "https://www.webstoemp.com/blog/headless-cms-graphql-api-eleventy/", "description": "With Eleventy, consuming data coming from a GraphQL API to generate static pages is as easy as using Markdown files.", "id": "4606b168-21a6-49df-8536-a2a00750d659" }
]

Now that the data is in the project, we need Astro to incorporate the data into its build process. To do this, we can use Astro’s new(ish) Content Layer API. The Content Layer API adds a content configuration file to your src directory that allows you to run and collect any number of content pieces from data in your project or external APIs. Create the file /src/content.config.ts (the name of this file matters, as this is what Astro is looking for in your project).
import { defineCollection, z } from "astro:content";
import { file } from 'astro/loaders';

const bookmarks = defineCollection({
  schema: z.object({
    pageTitle: z.string(),
    url: z.string(),
    description: z.string().optional()
  }),
  loader: file("src/data/bookmarks.json"),
});

export const collections = { bookmarks };

In this file, we import a few helpers from Astro. We use defineCollection to create the collection, z (Zod) to help define our types, and file, a specific content loader meant to read data files.
The defineCollection method takes an object as its argument with a required loader and an optional schema. The schema will help make our content type-safe and make sure our data is always what we expect it to be. In this case, we’ll define the three data properties each of our bookmarks has. It’s important to define all your data in your schema; otherwise, it won’t be available to your templates.
    We provide the loader property with a content loader. In this case, we’ll use the file loader that Astro provides and give it the path to our JSON.
    Finally, we need to export the collections variable as an object containing all the collections that we’ve defined (just bookmarks in our project). You’ll want to restart the local server by re-running npm run dev in your terminal to pick up the new data.
    Using the new bookmarks content collection
    Now that we have data, we can use it in our homepage to show the most recent bookmarks that have been added. To get the data, we need to access the content collection with the getCollection method from astro:content. Add the following code to the frontmatter for ./src/pages/index.astro .
---
import Layout from '../layouts/Layout.astro';
import { getCollection } from 'astro:content';

const bookmarks = await getCollection('bookmarks');
---

This code imports the getCollection method and uses it to create a new variable that contains the data in our bookmarks collection. The bookmarks variable is an array of data, as defined by the collection, which we can loop through in our template.
---
import Layout from '../layouts/Layout.astro';
import { getCollection } from 'astro:content';

const bookmarks = await getCollection('bookmarks');
---

<Layout>
  <div class="max-w-3xl mx-auto my-10">
    <h1 class="text-3xl text-center">My latest bookmarks</h1>
    <p class="text-xl text-center mb-5">
      This is only 10 of {bookmarks.length}
    </p>
    <h2 class="text-2xl mb-3">Latest bookmarks</h2>
    <ul class="grid gap-4">
      {
        bookmarks.slice(0, 10).map((item) => (
          <li>
            <a href={item.data?.url} class="block p-6 bg-white border border-gray-200 rounded-lg shadow-sm hover:bg-gray-100 dark:bg-gray-800 dark:border-gray-700 dark:hover:bg-gray-700">
              <h3 class="mb-2 text-2xl font-bold tracking-tight text-gray-900 dark:text-white">
                {item.data?.pageTitle}
              </h3>
              <p class="font-normal text-gray-700 dark:text-gray-400">
                {item.data?.description}
              </p>
            </a>
          </li>
        ))
      }
    </ul>
  </div>
</Layout>

This should pull the most recent 10 items from the array and display them on the homepage with some Tailwind styles. The main thing to note here is that the data structure has changed a little: the data for each item in our array now resides in the data property of the item. This allows Astro to put additional data on the object without colliding with any details we provide in our database. Your project should now look something like this.
    Now that we have data and display, let’s get to work on our search functionality.
    Building search with actions and vanilla JavaScript
    To start, we’ll want to scaffold out a new Astro component. In our example, we’re going to use vanilla JavaScript, but if you’re familiar with React or other frameworks that Astro supports, you can opt for client Islands to build out your search. The Astro actions will work the same.
    Setting up the component
    We need to make a new component to house a bit of JavaScript and the HTML for the search field and results. Create the component in a ./src/components/Search.astro file.
    <form id="searchForm" class="flex mb-6 items-center max-w-sm mx-auto"> <label for="simple-search" class="sr-only">Search</label> <div class="relative w-full"> <input type="text" id="search" class="bg-gray-50 border border-gray-300 text-gray-900 text-sm rounded-lg focus:ring-blue-500 focus:border-blue-500 block w-full p-2.5 dark:bg-gray-700 dark:border-gray-600 dark:placeholder-gray-400 dark:text-white dark:focus:ring-blue-500 dark:focus:border-blue-500" placeholder="Search Bookmarks" required /> </div> <button type="submit" class="p-2.5 ms-2 text-sm font-medium text-white bg-blue-700 rounded-lg border border-blue-700 hover:bg-blue-800 focus:ring-4 focus:outline-none focus:ring-blue-300 dark:bg-blue-600 dark:hover:bg-blue-700 dark:focus:ring-blue-800"> <svg class="w-4 h-4" aria-hidden="true" xmlns="<http://www.w3.org/2000/svg>" fill="none" viewBox="0 0 20 20"> <path stroke="currentColor" stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="m19 19-4-4m0-7A7 7 0 1 1 1 8a7 7 0 0 1 14 0Z"></path> </svg> <span class="sr-only">Search</span> </button> </form> <div class="grid gap-4 mb-10 hidden" id="results"> <h2 class="text-xl font-bold mb-2">Search Results</h2> </div> <script> const form = document.getElementById("searchForm"); const search = document.getElementById("search"); const results = document.getElementById("results"); form?.addEventListener("submit", async (e) => { e.preventDefault(); console.log("SEARCH WILL HAPPEN"); }); </script> The basic HTML is setting up a search form, input, and results area with IDs that we’ll use in JavaScript. The basic JavaScript finds those elements, and for the form, adds an event listener that fires when the form is submitted. The event listener is where a lot of our magic is going to happen, but for now, a console log will do to make sure everything is set up properly.
    Setting up an Astro Action for search
    In order for Actions to work, we need our project to allow for Astro to work in server or hybrid mode. These modes allow for all or some pages to be rendered in serverless functions instead of pre-generated as HTML during the build. In this project, this will be used for the Action and nothing else, so we’ll opt for hybrid mode.
    To be able to run Astro in this way, we need to add a server integration. Astro has integrations for most of the major cloud providers, as well as a basic Node implementation. I typically host on Netlify, so we’ll install their integration. Much like with Tailwind, we’ll use the CLI to add the package and it will build out the boilerplate we need.
npx astro add netlify

Once this is added, Astro is running in hybrid mode. Most of our site is pre-generated as HTML, but when the Action gets used, it will run as a serverless function.
    Setting up a very basic search Action
    Next, we need an Astro Action to handle our search functionality. To create the action, we need to create a new file at ./src/actions/index.js. All our Actions live in this file. You can write the code for each one in separate files and import them into this file, but in this example, we only have one Action, and that feels like premature optimization.
In this file, we’ll set up our search Action. Much like setting up our content collections, we’ll use a method called defineAction and give it a schema and, in this case, a handler. The schema will validate that the data it’s getting from our JavaScript is typed correctly, and the handler will define what happens when the Action runs.
import { defineAction } from "astro:actions";
import { z } from "astro:schema";
import { getCollection } from "astro:content";

export const server = {
  search: defineAction({
    schema: z.object({
      query: z.string(),
    }),
    // The handler receives the validated input object, so destructure the query string from it
    handler: async ({ query }) => {
      const bookmarks = await getCollection("bookmarks");
      const results = bookmarks.filter((bookmark) => {
        return bookmark.data.pageTitle.includes(query);
      });
      return results;
    },
  }),
};

For our Action, we’ll name it search and expect a schema of an object with a single property named query, which is a string. The handler function will get all of our bookmarks from the content collection and use a native JavaScript .filter() method to check if the query is included in any bookmark titles. This basic functionality is ready to test with our front-end.
    Using the Astro Action in the search form event
    When the user submits the form, we need to send the query to our new Action. Instead of figuring out where to send our fetch request, Astro gives us access to all of our server Actions with the actions object in astro:actions. This means that any Action we create is accessible from our client-side JavaScript.
    In our Search component, we can now import our Action directly into the JavaScript and then use the search action when the user submits the form.
<script>
  import { actions } from "astro:actions";

  const form = document.getElementById("searchForm");
  const search = document.getElementById("search");
  const results = document.getElementById("results");

  form?.addEventListener("submit", async (e) => {
    e.preventDefault();
    results.innerHTML = "";

    const query = search.value;
    // The Action's schema expects an object with a query property, so pass { query }
    const { data, error } = await actions.search({ query });

    if (error) {
      results.innerHTML = `<p>${error.message}</p>`;
      return;
    }

    // create a div for each search result
    data.forEach((item) => {
      const div = document.createElement("div");
      div.innerHTML = `
        <a href="${item.data?.url}" class="block p-6 bg-white border border-gray-200 rounded-lg shadow-sm hover:bg-gray-100 dark:bg-gray-800 dark:border-gray-700 dark:hover:bg-gray-700">
          <h3 class="mb-2 text-2xl font-bold tracking-tight text-gray-900 dark:text-white">
            ${item.data?.pageTitle}
          </h3>
          <p class="font-normal text-gray-700 dark:text-gray-400">
            ${item.data?.description}
          </p>
        </a>`;
      // append the div to the results container
      results.appendChild(div);
    });

    // show the results container
    results.classList.remove("hidden");
  });
</script>

With this in place, we can now get search results!
Though, they’re highly problematic. This is just a simple JavaScript filter, after all. You can search for “Favorite” and get my favorite bread recipe, but if you search for “favorite” (no caps), you’ll get nothing back… Not ideal.
    That’s why we should use a package like Fuse.js.
    Adding Fuse.js for fuzzy search
Fuse.js is a JavaScript package with utilities that make “fuzzy” search much easier for developers. Fuse accepts a string and, based on a number of criteria (and one or more sets of data), returns results that closely match even when the match isn’t perfect. Depending on the settings, Fuse can match “Favorite”, “favorite”, and even misspellings like “favrite” all to the right results.
    Is Fuse as powerful as something like Algolia or ElasticSearch? No. Is it free and pretty darned good? Absolutely! To get Fuse moving, we need to install it into our project.
npm install fuse.js

From there, we can use it in our Action by importing it in the file and creating a new instance of Fuse based on our bookmarks collection.
import { defineAction } from "astro:actions";
import { z } from "astro:schema";
import { getCollection } from "astro:content";
import Fuse from "fuse.js";

export const server = {
  search: defineAction({
    schema: z.object({
      query: z.string(),
    }),
    handler: async ({ query }) => {
      const bookmarks = await getCollection("bookmarks");

      // Create a Fuse instance over the collection, weighting titles the highest
      const fuse = new Fuse(bookmarks, {
        threshold: 0.3,
        keys: [
          { name: "data.pageTitle", weight: 1.0 },
          { name: "data.description", weight: 0.7 },
          { name: "data.url", weight: 0.3 },
        ],
      });

      const results = fuse.search(query);
      return results;
    },
  }),
};

In this case, we create the Fuse instance with a few options. We give it a threshold value between 0 and 1 to decide how “fuzzy” to make the search. Fuzziness is definitely something that depends on use case and the dataset. In our dataset, I’ve found 0.3 to be a great threshold.
    The keys array allows you to specify which data should be searched. In this case, I want all the data to be searched, but I want to allow for different weighting for each item. The title should be most important, followed by the description, and the URL should be last. This way, I can search for keywords in all these areas.
    Once there’s a new Fuse instance, we run fuse.search(query) to have Fuse check the data, and return an array of results.
    When we run this with our front-end, we find we have one more issue to tackle.
    The structure of the data returned is not quite what it was with our simple JavaScript. Each result now has a refIndex and an item. All our data lives on the item, so we need to destructure the item off of each returned result.
    To do that, adjust the front-end forEach.
// create a div for each search result
data.forEach(({ item }) => {
  const div = document.createElement("div");
  div.innerHTML = `
    <a href="${item.data?.url}" class="block p-6 bg-white border border-gray-200 rounded-lg shadow-sm hover:bg-gray-100 dark:bg-gray-800 dark:border-gray-700 dark:hover:bg-gray-700">
      <h3 class="mb-2 text-2xl font-bold tracking-tight text-gray-900 dark:text-white">
        ${item.data?.pageTitle}
      </h3>
      <p class="font-normal text-gray-700 dark:text-gray-400">
        ${item.data?.description}
      </p>
    </a>`;

  // append the div to the results container
  results.appendChild(div);
});

Now, we have a fully working search for our bookmarks.
    Next steps
    This just scratches the surface of what you can do with Astro Actions. For instance, we should probably add additional error handling based on the error we get back. You can also experiment with handling this at the page-level and letting there be a Search page where the Action is used as a form action and handles it all as a server request instead of with front-end JavaScript code. You could also refactor the JavaScript from the admittedly low-tech vanilla JS to something a bit more robust with React, Svelte, or Vue.
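On the error-handling point, here is a minimal client-side sketch of what that might look like, assuming the same form and results elements as above; the isInputError helper from astro:actions distinguishes validation failures from other errors, and the messages shown are placeholders:

// Sketch: replaces the error branch in the Search component's <script> (not from the article)
import { actions, isInputError } from "astro:actions";

const { data, error } = await actions.search({ query });

if (error) {
  if (isInputError(error)) {
    // The input didn't match the Action's schema (for example, an empty query)
    results.innerHTML = `<p>Please enter a valid search term.</p>`;
  } else {
    // Something went wrong on the server; show the error code and message
    results.innerHTML = `<p>${error.code}: ${error.message}</p>`;
  }
  return;
}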
    One thing is for sure, Astro keeps looking at the front-end landscape and learning from the mistakes and best practices of all the other frameworks. Actions, Content Layer, and more are just the beginning for a truly compelling front-end framework.
    Powering Search With Astro Actions and Fuse.js originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  2. by: Abhishek Prakash
    Tue, 11 Mar 2025 12:50:25 GMT

In case you didn't know it already, regularly charging the battery to 100% or fully discharging it puts your battery under stress and may lead to poor battery life in the long run.
I am not making these claims on my own. This is what the experts and even the computer manufacturers tell you.
    As you can see in the official Lenovo video above, continuous full charging and discharging accelerate the deterioration of battery health. They also tell you that the optimum battery charging range is 20-80%.
Although Lenovo also says that batteries these days are made to last longer than your computer, I am not sure what their idea of an average computer lifespan is. I would prefer to keep the battery healthy for a longer period and thus extract good performance from my laptop for as long as it lives.
    I mean, it's all about following the best practices, right?
    Now, you could manually plug and unplug the power cord but it won't work if you are connected to a docking station or use a modern monitor to power your laptop.
    What can you do in that case? Well, to control the battery charging on Linux, you have a few options:
• KDE Plasma has this as an in-built feature. That's why KDE is ❤️
• GNOME has extensions for this. Typical GNOME thing.
• There are command line tools to limit battery charging levels. Typical Linux thing 😍

Let's see them one by one.
📋 Please verify which desktop environment you are using and then follow the appropriate method.

Limit laptop battery charging in KDE
    If you are using KDE Plasma desktop environment, all you have to do is to open the Settings app and go to Power Management. In the Advanced Power Settings, you'll see the battery levels settings.
    I like that KDE informs the users about reduced battery life due to overcharging. It even sets the charging levels at 50-90% by default.
    Of course, you can change the limit to something like 20-80. Although, I am not a fan of the lower 20% limit and I prefer 40-80% instead.
    That's KDE for you. Always caring for its kusers.
💡 It is possible that the battery charging control feature may need to be enabled from the BIOS. Look for it under power management settings in BIOS.

Set battery charging limit in GNOME
    Like most other things, GNOME users can achieve this by using a GNOME extension.
    There is an extension called ThinkPad Battery Threshold for this purpose. Although it mentions ThinkPad everywhere, you don't need to own a Lenovo ThinkPad to use it.
    From what I see, the command it runs should work for most, if not all, laptops from different manufacturers.
    I have a detailed tutorial on using GNOME Extensions, so I won't repeat the steps.
    Use the Extension Manager tool to install ThinkPad Battery Threshold extension.
Once the extension is installed, you can find it in the system tray. On the first run, it shows a red exclamation mark because the thresholds are not enabled yet.
    If you click on the Threshold settings, you will be presented with configuration options.
    Once you have set the desired values, click on apply. Next, you'll have to click Enable thresholds. When you hit that, it will ask for your password.
At this screen, you can see a partial hint about the command it is going to run.
📋 From what I experienced, while it does set an upper limit, it didn't set the lower limit for my Asus Zenbook. I'll check it on my Tuxedo laptop later. Meanwhile, if you try it on some other device, do share if it works for the lower charging limit as well.

Using command line to set battery charging thresholds
🚧 You must have basic knowledge of the Linux command line. That's because there are many moving parts and variables for this part.

Here's the thing. For most laptops, there should be file(s) to control battery charging in the /sys/class/power_supply/BAT0/ directory, but the file names are not standard. It could be charge_control_end_threshold or charge_stop_threshold or something similar.
    Also, you may have more than one battery. For most laptops, it will be BAT0 that is the main battery but you need to make sure of that.
    Install the upower CLI tool on your distribution and then use this command:
upower --enumerate

It will show all the power devices present on the system:
/org/freedesktop/UPower/devices/battery_BAT0
/org/freedesktop/UPower/devices/line_power_AC0
/org/freedesktop/UPower/devices/line_power_ucsi_source_psy_USBC000o001
/org/freedesktop/UPower/devices/line_power_ucsi_source_psy_USBC000o002
/org/freedesktop/UPower/devices/headphones_dev_BC_87_FA_23_77_B2
/org/freedesktop/UPower/devices/DisplayDevice

You can find the battery name here.
    The next step is to look for the related file in /sys/class/power_supply/BAT0/ directory.
    If you find a file starting with charge, note down its name and then add the threshold value to it.
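If you want a quick way to list the candidate files, something like this works (it assumes the BAT0 directory from above; the glob is only an illustration, since the exact file names vary by vendor):

ls /sys/class/power_supply/BAT0/charge_*threshold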
    In my case, it is /sys/class/power_supply/BAT0/charge_control_end_threshold, so I set an upper threshold of 80 in this way:
echo 80 | sudo tee /sys/class/power_supply/BAT0/charge_control_end_threshold

You could also use the nano editor to edit the file, but using the tee command is quicker here.
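To double-check that the limit took effect, you can simply read the files back (paths assume the same BAT0 battery as above; with the limit active, the status typically shows "Not charging" once the threshold is reached while plugged in):

cat /sys/class/power_supply/BAT0/charge_control_end_threshold
cat /sys/class/power_supply/BAT0/status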
💡 You can also use tlp for this purpose by editing the /etc/tlp.conf file.

Conclusion
See, if you were getting 10 hours of average battery life on a new laptop, it is normal to expect it to be around 7-8 hours after two years. But if you leave it at full charge all the time, it may come down to 6 hours instead of 7-8 hours. The numbers here are just illustrative.
This 20-80% range is what the industry recommends these days. On my Samsung Galaxy smartphone, there is a "Battery protection" setting that stops charging the device once it reaches 80%.
    I wish a healthy battery life for your laptop 💻
  3. By: Linux.com Editorial Staff
    Mon, 10 Mar 2025 15:30:39 +0000

    Join us for a Complimentary Live Webinar Sponsored by Linux Foundation Education and Arm Education
    March 19, 2025 | 08:00 AM PDT (UTC-7)
    You won’t believe how fast this is! Join us for an insightful webinar on leveraging CPUs for machine learning inference using the recently released, open source KleidiAI library. Discover how KleidiAI’s optimized micro-kernels are already being adopted by popular ML frameworks like PyTorch, enabling developers to achieve amazing inference performance without GPU acceleration. We’ll discuss the key optimizations available in KleidiAI, review real-world use cases, and demonstrate how to get started with ease in a fireside chat format, ensuring you stay ahead in the ML space and harness the full potential of CPUs already in consumer hands. This Linux Foundation Education webinar is supported under the Semiconductor Education Alliance and sponsored by Arm.
    Register Now

    The post Learn how easy it is to leverage CPUs for machine learning with our free webinar appeared first on Linux.com.
  4. Smashing Meets Accessibility

    by: Geoff Graham
    Mon, 10 Mar 2025 15:08:47 +0000

    The videos from Smashing Magazine’s recent event on accessibility were just posted the other day. I was invited to host the panel discussion with the speakers, including a couple of personal heroes of mine, Stéphanie Walter and Sarah Fossheim. But I was just as stoked to meet Kardo Ayoub who shared his deeply personal story as a designer with a major visual impairment.
    I’ll drop the video here:
    I’ll be the first to admit that I had to hold back my emotions as Kardo detailed what led to his impairment, the shock that came of it, and how he has beaten the odds to not only be an effective designer, but a real leader in the industry. It’s well worth watching his full presentation, which is also available on YouTube alongside the full presentations from Stéphanie and Sarah.
    Smashing Meets Accessibility originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  5. by: Abhishek Kumar
    Mon, 10 Mar 2025 11:05:22 GMT

    If you are someone interested in self-hosting, home automation, or just want to tinker with your Raspberry Pi, you have various options to get started.
But if you are new and want something easy to get you up to speed, CasaOS is worth a try.
    CasaOS isn't your ordinary operating system. It is more like a conductor, bringing all your favorite self-hosted applications together under one roof.
    Built around the Docker ecosystem, it simplifies the process of managing various services, apps, and smart devices from a browser-based dashboard.
CasaOS interface running on ZimaBoard

Originally developed by the makers of ZimaBoard, CasaOS makes the deployment of tools like Jellyfin, Plex, Immich, and PhotoPrism a matter of a few clicks.
ZimaBoard Turned My Dream of Owning a Homelab into Reality: Get control of your data by hosting open source software easily with this plug and play homelab device. (It's FOSS, Abhishek Prakash)

Let us find out more and explore how CasaOS can transform our simple Raspberry Pi into a powerful personal cloud.
    What is CasaOS?
    Think of CasaOS (Casa being "home" in Spanish) as a home for your Raspberry Pi or similar device.
    It sits on top of your existing operating system, like Ubuntu or Raspberry Pi OS, and transforms it into a self-hosting machine.
CasaOS simplifies the process of installing and managing applications you'd typically run as Docker containers, blending in the user-friendliness of a Docker management platform like Portainer.
    It acts as the interface between you and your applications, providing a sleek, user-friendly dashboard that allows you to control everything from one place.
    You can deploy various applications, including media servers like Jellyfin or file-sharing platforms like Nextcloud, all through its web-based interface.
    Installing CasaOS on Raspberry Pi
    Installing CasaOS on a Raspberry Pi is as easy as running a single bash script. But first, let’s make sure your Raspberry Pi is ready:
💡 Feeling a bit hesitant about running scripts? CasaOS offers a live demo on their website (username: casaos, password: casaos) to familiarize yourself with the interface before taking the plunge.

Ensure your Pi’s operating system is up-to-date by running the following commands:

sudo apt update && sudo apt upgrade -y

If you do not have curl installed already, install it by running:

sudo apt install curl -y

Now, grab the installation script from the official website and run it:

curl -fsSL https://get.casaos.io | sudo bash

Access the CasaOS web interface
    After the installation completes, you will receive the IP address in the terminal to access CasaOS from your web browser.

Simply type this address into your browser. If you are unsure of the IP, run hostname -I on the Raspberry Pi to get it, and you will be greeted by the CasaOS welcome screen.

    The initial setup process will guide you through creating an account and getting started with your personal cloud.
    Getting Started
    Once inside, CasaOS welcomes you with a clean, modern interface. You’ll see system stats like CPU usage, memory, and disk space upfront in widget-style panels.
    There’s also a search bar for easy navigation, and at the heart of the dashboard lies the app drawer—your gateway to all installed and available applications.
    CasaOS comes pre-installed with two main apps: Files and the App Store. While the Files app gives you easy access to local storage on your Raspberry Pi, the App Store is where the magic really happens.
    From here, you can install various applications with just a few clicks.
    Exploring the magical app store
    The App Store is one of the main attractions of CasaOS. It offers a curated selection of applications that can be deployed directly on your Pi with minimal effort.
    Here’s how you can install an app:
Go to the app store: From the dashboard, click on the App Store icon.
Browse or search for an app: Scroll through the list of available apps or use the search bar to find what you’re looking for.
Click install: Once you find the app you want, simply click on the installation button, and CasaOS will handle the rest. The app will appear in your app drawer once the installation is complete.
    It is that simple.
💡 Container-level settings for the apps can be accessed by right-clicking the app icon in the dashboard. It lets you map (Docker volume) directories on the disk to the app. For example, if you are using Jellyfin, you should map your media folder in the Jellyfin (container) settings. You should see it in the later sections of this tutorial.

Access
    Once you have installed applications in CasaOS, accessing them is straightforward, thanks to its intuitive design.
    All you have to do is click on the Jellyfin icon, and it will automatically open up in a new browser window.
Each application you install behaves in a similar way; CasaOS takes care of the back-end configuration to make sure the apps are easily accessible through your browser.
    No need to manually input IP addresses or ports, as CasaOS handles that for you.
    For applications like Jellyfin or any self-hosted service, you will likely need to log in with default credentials (which you can and should change after the first use).
    In the case of Jellyfin, the default login credentials were:
Username: admin
Password: admin

Of course, CasaOS allows you to customize these credentials when setting up the app initially, and it's always a good idea to use something more secure.
    My experience with CasaOS
    For this article, I installed a few applications on CasaOS tailored to my homelab needs:
• A Jellyfin server for media streaming
• Transmission as a torrent client
• File Browser to easily interact with files through the browser
• Cloudflared for tunneling with Cloudflare
• Nextcloud to set up my cloud
• A custom Docker stack for hosting a WordPress site

I spent a full week testing these services in my daily routine and jotted down some key takeaways, both good and bad.
    While CasaOS offers a smooth experience overall, there are some quirks that require you to have Docker knowledge to work with them.
💡 I faced a few issues that were caused by mounting external drives and binding them to the CasaOS apps. I solved them by automounting an external disk.

Jellyfin media server: Extra drive mount issue
    When I first set up Jellyfin on day one, it worked well right out of the box. However, things got tricky once I added an extra drive for my media library.
    I spent a good chunk of time managing permissions and binding volumes, which was definitely not beginner-friendly.
    For someone new to Docker or CasaOS, the concept of binding volumes can be perplexing. You don’t just plug in the drive and expect it to work, it requires configuring how your media files will link to the Jellyfin container.
You need to edit the fstab file if you want it to mount at the exact same location every time (see the example entry at the end of this section).
Even after jumping through those hoops, it wasn’t smooth sailing. One evening, I accidentally turned off the Raspberry Pi.
    When it booted back up, the additional drive wasn’t mounted automatically, and I had to go through the whole setup process again ☹️
    So while Jellyfin works, managing external drives in CasaOS feels like it could be a headache for new users.
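For reference, here is a minimal sketch of the kind of /etc/fstab entry involved; the UUID, mount point, and filesystem type are placeholders, and you can get the real UUID with sudo blkid:

# /etc/fstab — mount the media drive at the same place on every boot (illustrative values)
UUID=1234-ABCD  /mnt/media  ext4  defaults,nofail  0  2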
    Cloudflared connection drops
    I used Cloudflare Tunnel to access the services from outside the home network.
It was a bit of a mixed bag. For the most part, it worked fine, but there were brief periods where the connection was not working even though it said it was connected.
    The connection would just drop unexpectedly, and I’d have to fiddle around with it to get things running again.
    After doing some digging, I found out that the CLI tool for Cloudflare Tunnels had recently been updated, so that might’ve been the root of the issue.
    Hopefully, it was a temporary glitch, but it is something to keep in mind if you rely on stable connections.
    Transmission torrent Client: Jellyfin's Story Repeats
💡 The default username & password is casaos. The tooltip for some applications contains such information. You can also edit them and add notes for the application.

Transmission was solid for saving files locally, but as soon as I tried adding the extra drive to save files to my media library, I hit the same wall as with Jellyfin.
    The permissions errors cropped up, and again, the auto-mount issue reared its head.
    So, I would say it is fine for local use if you’re sticking to one drive, but if you plan to expand your storage, be ready for some trial and error.
    Nextcloud: Good enough but not perfect
    Setting up a basic Nextcloud instance in CasaOS was surprisingly easy. It was a matter of clicking the install button, and within a few moments, I had my personal cloud up and running.
    However, if you’re like me and care about how your data is organized and stored, there are a few things you’ll want to keep in mind.
    When you first access your Nextcloud instance, it defaults to using SQLite as the database, which is fine for simple, small-scale setups.
    But if you’re serious about storing larger files or managing multiple users, you’ll quickly realize that SQLite isn’t the best option. Nextcloud itself warns you that it’s not ideal for handling larger loads, and I would highly recommend setting up a proper MySQL or MariaDB database instead.
    Doing so will give you more stability and performance in the long run, especially as your data grows.
    Beyond the database choice, I found that even after using the default setup, Nextcloud’s health checks flagged several issues.
    For example, it complained about the lack of an HTTPS connection, which is crucial for secure file transfers.
    If you want your Nextcloud instance to be properly configured and secure, you'll need to invest some time to set up things like:
• Setting up a secure SSL certificate
• Optimizing your database
• Handling other backend details that aren’t obvious to a new user

So while Nextcloud is easy to get running initially, fine-tuning it for real-world use takes a bit of extra work, especially if you are focused on data integrity and security.
    Custom WordPress stack: Good stuff!
    Now, coming to the WordPress stack I manually added, this is where CasaOS pleasantly surprised me.
    While I still prefer using Portainer to manage my custom Docker stacks, I have to admit that CasaOS has put in great effort to make the process intuitive.
    It is clear they’ve thought about users who want to deploy their own stacks using Docker Compose files or Docker commands.
    Adding the stack was simple, and the CasaOS interface made it relatively easy to navigate.
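The article doesn't include the stack itself, but to give a flavor of what you might paste into CasaOS's custom install dialog, here is a minimal Docker Compose sketch for WordPress backed by MariaDB; the image tags, passwords, ports, and volume names are placeholders:

# docker-compose.yml — illustrative WordPress + MariaDB stack (not from the article)
services:
  db:
    image: mariadb:11
    environment:
      MYSQL_ROOT_PASSWORD: change-me
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: change-me-too
    volumes:
      - db_data:/var/lib/mysql

  wordpress:
    image: wordpress:latest
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: change-me-too
    volumes:
      - wp_data:/var/www/html

volumes:
  db_data:
  wp_data: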
    Final thoughts
    After using CasaOS for several days, I can confidently say it’s a tool with immense potential. The ease of deploying apps like Jellyfin and Nextcloud makes it a breeze for users who want a no-hassle, self-hosted solution.
    However, CasaOS is not perfect yet. The app store, while growing, feels limited, and those looking for a more customizable experience may find the lack of advanced Docker controls frustrating at first.
Learn Docker: Complete Beginner’s Course – Learn Docker, an important skill to have for any DevOps and modern sysadmin. Learn all the essentials of Docker in this series. (Linux Handbook, Abdullah Tarek)

That said, CasaOS succeeds in making Docker and self-hosting more accessible to the masses.
    For homelab enthusiasts like me, it is a great middle ground between the complexity of Docker CLI and the bloated nature of full-blown home automation systems.
Whether you are a newcomer or a seasoned tinkerer, CasaOS is worth checking out if you are not afraid to deal with a few bumps along the way.
  6. by: Tatiana P
    Mon, 10 Mar 2025 07:27:07 +0000

    When things get overwhelming, I take a step back – whether by going for a walk or doing some different activity. It gives my mind some breathing space and helps me tackle challenges more effectively. 
    About me
    I am Mala Devi Selvarathinam. I am currently working as an Azure Cloud Consultant at Eficode. My role is fascinating because there are new challenges and new things to do every day. This keeps my work exciting.
    From India to Finland
I completed my bachelor’s in India in 2014, and I worked as a GRC analyst for an MNC for five years. I was leading a team, but I felt like I was not very interested in the field. I had always been fascinated by Cloud technologies, and I knew I had to make a switch. That’s when I started applying for an Erasmus Mundus scholarship to study in Europe for my Master’s degree.
    When I got selected, I was super happy and packed my bags to Finland in 2019. It was a double program degree. I studied at Aalto University in Finland for my first year and my second year at the Technical University of Denmark. Little did I know, the journey ahead would test me in ways I hadn’t imagined.
    Mala Devi Selvarathinam, Azure Cloud Consultant, Eficode
    Overcoming challenges
    I started my Master’s in a new country in 2019, and just a few months later, the pandemic hit the world. This was one of the most stressful periods because everything around me was shutting down. It was also difficult because summer internships were challenging to find at the time and no one knew what was happening. But Finland and Aalto University were very supportive. I took the initiative to join one of Aalto’s research teams as a research assistant, which gave me insight into the research industry and kept me going through uncertain times.
    In 2021, I completed my Master’s degree, but the world was still feeling the aftershocks of the pandemic. Starting over in a new country was daunting—especially when I found myself back at square one, working as a trainee at KPMG. The thought of beginning as a trainee again after years of experience in India was intimidating.
    However, KPMG turned out to be a great learning experience. I smoothly transitioned into consulting in Finland, climbing from trainee to junior consultant and then to senior consultant. I learned how consultation works in Finland and was also embracing the work culture which was very different to how it is back home. Leaving KPMG was bittersweet, but I wanted to dive deeper into Cloud and DevOps-related aspects, and Eficode matched what I wanted to do.
    How did I find Eficode?
    I’ve been aware of Eficode since my Master’s days at Aalto University. I have been following their work, subscribed to their newsletters, and admired their expertise in DevOps and Cloud—areas that aligned perfectly with my career aspirations.
    So, when I spotted an opening, I didn’t hesitate to apply. Six months ago, I officially became part of Eficode, and it has been an exciting journey ever since!
    Working at Eficode
    One key thing that I find exceptional at Eficode is that I’m not afraid to ask anyone any questions. There are no silly questions. That is nice, and it makes your work easier because you don’t get stuck with anything. You know you can always reach out for some help. We have a welcoming and helpful culture. The company’s hybrid work model offers flexibility, allowing me to balance work and life seamlessly.
    As a consultant, I work with multiple clients, and the requirements are different for each client. My day involves prioritizing which client has urgent demands and working on those things. No day is the same here, which keeps the job exciting. The ever-changing nature of the job keeps things fresh and engaging—I’m constantly learning, adapting, and growing.
    My motivation to join the IT field
    In the early 2000s, computers were part of my high school lab. This was a prestigious place to get access to. Seeing a machine complete tasks that we once did manually was mind-blowing. This intrigued me quite a lot. But what truly inspired me was my uncle who worked in IT—he always had answers to my questions, all thanks to the internet!
    So, after high school, when I had to choose my specialization, it was a natural choice to do Computer Science. From playing video games to exploring programming, my curiosity only grew stronger, leading me to where I am today. I was fascinated as a kid, and I still am.
    Tips to overcome challenges
    When things get overwhelming, I take a step back – whether by going for a walk or doing some different activity. It gives my mind some breathing space and helps me tackle challenges more effectively. I tackle things one at a time and see where that leads. Second, I remind myself of my end goal, and why I am here doing these things. Keeping the bigger picture in mind puts everything into perspective.
    About the impact of AI
    AI is an incredible tool—it has the potential to handle mundane, repetitive tasks, freeing up our time for more meaningful work. But like any powerful tool, it needs to be used wisely. Striking the right balance is key.
    I have been experimenting myself with AI tools and it’s amazing to see what they are capable of. It is going to be exciting to see what the future has in store and how this will change the ways of working in IT.
    Skills in IT
    There are two primary skills you must have in the field of IT: the first one, and probably the most important, is being adaptive. Technology evolves rapidly. Adapting to changes is essential because what’s relevant today might be obsolete tomorrow. Staying informed is essential.
The second: while it’s good to have a broad understanding, specializing in one area gives you a strong foundation. Once you’ve mastered one domain, it’s easier to branch into another. And this further improves your expertise.
    My life outside work
    Outside of work, I’m passionate about art—I enjoy calligraphy, painting, and knitting. Reading is another big part of my life; I make it a point to read at least a couple of books a month, preferably fiction.
    Cooking became a necessity when I moved to Finland, but over time, I fell in love with it. Now, I’m always experimenting with new recipes alongside my husband.
    And of course, I’m a huge fan of animated movies! My life mottos—”Just keep swimming” (from Finding Nemo) and “Keep moving forward” (from Walt Disney)—keep me motivated no matter what challenges come my way.
    The post Role model blog: Mala Devi Selvarathinam, Eficode first appeared on Women in Tech Finland.
  7. Email Spam Checker AI

    by: aiparabellum.com
    Mon, 10 Mar 2025 05:58:30 +0000


    Email Spam Checker is an intuitive online tool that helps users determine whether their emails might trigger spam filters. By analyzing various elements of an email, including content, formatting, and header information, this tool provides a comprehensive assessment of an email’s likelihood of being marked as spam. Whether you’re a business owner sending marketing emails or an individual concerned about the deliverability of your messages, this tool offers valuable insights to optimize your email communication.
    Features of Email Spam Checker AI
• Real-time Email Analysis – The tool scans your email content instantly, providing immediate feedback on potential spam triggers.
• Comprehensive Spam Score – Receive a detailed spam score that indicates how likely your email is to be flagged by common spam filters.
• Content Evaluation – The checker analyzes the text content of your email for spam-triggering words, phrases, and patterns.
• Header Analysis – The tool examines email headers for potential issues that might affect deliverability.
• HTML Structure Check – For HTML emails, the tool checks the code structure for potential red flags.
• Improvement Suggestions – Receive actionable recommendations to improve your email’s deliverability.
• User-friendly Interface – The intuitive design makes it easy for users of all technical levels to check their emails.
• Privacy-focused – Your email content is analyzed securely without storing sensitive information.

How It Works
• Copy and Paste Your Email Content – Simply copy the content of your email, including the subject line and body, and paste it into the provided text field.
• Include Headers (Optional) – For a more thorough analysis, you can also include the email headers.
• Click ‘Check for Spam’ – Once you’ve entered your email content, click the button to initiate the analysis.
• Review the Analysis Results – The tool will process your email and provide a detailed report on potential spam triggers.
• Implement Suggested Changes – Use the recommendations provided to modify your email content and improve its deliverability.
• Re-check If Necessary – After making changes, you can run the check again to see if your spam score has improved.

Benefits of Email Spam Checker AI
• Improved Email Deliverability – By identifying and addressing potential spam triggers, you can increase the chances of your emails reaching the intended recipients.
• Time and Resource Savings – Avoid the frustration and wasted resources associated with emails being filtered out before reaching recipients.
• Enhanced Sender Reputation – Consistently sending non-spammy emails helps maintain a positive sender reputation with email service providers.
• Marketing Campaign Optimization – For businesses, the tool helps optimize marketing emails to ensure they reach customers’ inboxes.
• Real-time Feedback – Get immediate insights into potential issues with your email content before sending.
• Educational Value – Learn about common spam triggers and best practices for email composition.
• Reduced Risk of Blacklisting – By avoiding spam-like behavior, reduce the risk of your domain being blacklisted by email providers.
• Professional Communication – Ensure your professional communications maintain a high standard of deliverability.

Pricing
• Free Basic Check – A limited version allowing users to check a small number of emails per day.
• Premium Plan – $9.99/month for unlimited email checks and additional analysis features.
• Business Plan – $24.99/month including API access and bulk email checking capabilities.
• Enterprise Solutions – Custom pricing for organizations with specific needs and high-volume requirements.
• Annual Discount – Save 20% when subscribing to annual plans instead of monthly billing.
• 14-Day Free Trial – Available for Premium and Business plans to test all features before committing.

Review
    After thorough testing, AI Para Bellum’s Email Spam Checker proves to be a reliable and efficient tool for anyone concerned about email deliverability. The interface is clean and straightforward, making it accessible even for users with limited technical knowledge. The analysis is comprehensive, covering various aspects that might trigger spam filters, from specific keywords to HTML structure.
    The detailed reports provide clear insights into potential issues, and the suggested improvements are practical and easy to implement. Business users will particularly appreciate the bulk checking capabilities available in higher-tier plans, allowing for the analysis of entire email campaigns efficiently.
    One notable strength is the tool’s ability to keep up with evolving spam detection algorithms used by major email providers. This ensures that the recommendations remain relevant in the constantly changing landscape of email filtering.
    While the free version offers limited functionality, the paid plans provide excellent value for businesses and individuals who rely heavily on email communication. The pricing structure is reasonable considering the potential cost savings from improved email deliverability.
    Conclusion
    In an era where effective email communication is crucial, AI Para Bellum’s Email Spam Checker stands out as an essential tool for ensuring your messages reach their intended recipients. By providing detailed analysis and actionable recommendations, this tool helps users optimize their emails and avoid common spam triggers. Whether you’re a marketing professional managing email campaigns or an individual concerned about important messages being filtered out, this spam checker offers valuable insights to improve deliverability. With its user-friendly interface, comprehensive analysis, and reasonable pricing, AI Para Bellum’s Email Spam Checker is a worthwhile investment for anyone serious about effective email communication.
    Visit Website The post Email Spam Checker AI appeared first on AI Parabellum • Your Go-To AI Tools Directory for Success.
  8. Email Spam Checker AI

    by: aiparabellum.com
    Mon, 10 Mar 2025 05:58:30 +0000

    Email Spam Checker is an intuitive online tool that helps users determine whether their emails might trigger spam filters. By analyzing various elements of an email, including content, formatting, and header information, this tool provides a comprehensive assessment of an email’s likelihood of being marked as spam. Whether you’re a business owner sending marketing emails or an individual concerned about the deliverability of your messages, this tool offers valuable insights to optimize your email communication.
    Features of Email Spam Checker AI
Real-time Email Analysis – The tool scans your email content instantly, providing immediate feedback on potential spam triggers.
Comprehensive Spam Score – Receive a detailed spam score that indicates how likely your email is to be flagged by common spam filters.
Content Evaluation – The checker analyzes the text content of your email for spam-triggering words, phrases, and patterns.
Header Analysis – The tool examines email headers for potential issues that might affect deliverability.
HTML Structure Check – For HTML emails, the tool checks the code structure for potential red flags.
Improvement Suggestions – Receive actionable recommendations to improve your email's deliverability.
User-friendly Interface – The intuitive design makes it easy for users of all technical levels to check their emails.
Privacy-focused – Your email content is analyzed securely without storing sensitive information.
How It Works
Copy and Paste Your Email Content – Simply copy the content of your email, including the subject line and body, and paste it into the provided text field.
Include Headers (Optional) – For a more thorough analysis, you can also include the email headers.
Click 'Check for Spam' – Once you've entered your email content, click the button to initiate the analysis.
Review the Analysis Results – The tool will process your email and provide a detailed report on potential spam triggers.
Implement Suggested Changes – Use the recommendations provided to modify your email content and improve its deliverability.
Re-check If Necessary – After making changes, you can run the check again to see if your spam score has improved.
Benefits of Email Spam Checker AI
Improved Email Deliverability – By identifying and addressing potential spam triggers, you can increase the chances of your emails reaching the intended recipients.
Time and Resource Savings – Avoid the frustration and wasted resources associated with emails being filtered out before reaching recipients.
Enhanced Sender Reputation – Consistently sending non-spammy emails helps maintain a positive sender reputation with email service providers.
Marketing Campaign Optimization – For businesses, the tool helps optimize marketing emails to ensure they reach customers' inboxes.
Real-time Feedback – Get immediate insights into potential issues with your email content before sending.
Educational Value – Learn about common spam triggers and best practices for email composition.
Reduced Risk of Blacklisting – By avoiding spam-like behavior, reduce the risk of your domain being blacklisted by email providers.
Professional Communication – Ensure your professional communications maintain a high standard of deliverability.
Pricing
Free Basic Check – A limited version allowing users to check a small number of emails per day.
Premium Plan – $9.99/month for unlimited email checks and additional analysis features.
Business Plan – $24.99/month including API access and bulk email checking capabilities.
Enterprise Solutions – Custom pricing for organizations with specific needs and high-volume requirements.
Annual Discount – Save 20% when subscribing to annual plans instead of monthly billing.
14-Day Free Trial – Available for Premium and Business plans to test all features before committing.
Review
    After thorough testing, AI Para Bellum’s Email Spam Checker proves to be a reliable and efficient tool for anyone concerned about email deliverability. The interface is clean and straightforward, making it accessible even for users with limited technical knowledge. The analysis is comprehensive, covering various aspects that might trigger spam filters, from specific keywords to HTML structure.
    The detailed reports provide clear insights into potential issues, and the suggested improvements are practical and easy to implement. Business users will particularly appreciate the bulk checking capabilities available in higher-tier plans, allowing for the analysis of entire email campaigns efficiently.
    One notable strength is the tool’s ability to keep up with evolving spam detection algorithms used by major email providers. This ensures that the recommendations remain relevant in the constantly changing landscape of email filtering.
    While the free version offers limited functionality, the paid plans provide excellent value for businesses and individuals who rely heavily on email communication. The pricing structure is reasonable considering the potential cost savings from improved email deliverability.
    Conclusion
    In an era where effective email communication is crucial, AI Para Bellum’s Email Spam Checker stands out as an essential tool for ensuring your messages reach their intended recipients. By providing detailed analysis and actionable recommendations, this tool helps users optimize their emails and avoid common spam triggers. Whether you’re a marketing professional managing email campaigns or an individual concerned about important messages being filtered out, this spam checker offers valuable insights to improve deliverability. With its user-friendly interface, comprehensive analysis, and reasonable pricing, AI Para Bellum’s Email Spam Checker is a worthwhile investment for anyone serious about effective email communication.
    Visit Website The post Email Spam Checker AI appeared first on AI Parabellum • Your Go-To AI Tools Directory for Success.
  9. by: Community
    Sat, 08 Mar 2025 08:54:21 GMT

Imagine a scenario: you downloaded a new binary called ls from the internet. The application could be intentionally malicious. Such binaries are difficult to trust, and running them on your system could lead to a hijacking attack, with the program sending your sensitive files and clipboard contents to a malicious server or interfering with existing processes on your machine.
Wouldn't it be great if you had a tool to run and test the application within defined security boundaries? We all know the ls command lists the files in the current working directory. So why would it need a network connection to operate? That doesn't make sense.
That's where the tool pledge comes in. Pledge restricts the system calls a program can make. It is natively supported on OpenBSD. Although it isn't officially supported on Linux, I'll show you a cool hack to use pledge on your Linux system.
🚧As you can see, this is rather an advanced tool for sysadmins, network engineers and people in the network security field. Most desktop Linux users would not need something like this, but that does not mean you cannot explore it out of curiosity.
What makes this port possible?
Thanks to the remarkable work of Justine Tunney, the core developer behind the Cosmopolitan Libc project.
Cosmopolitan acts as a bridge for compiling a C program for 7 different platforms (Linux + Mac + Windows + FreeBSD + OpenBSD 7.3 + NetBSD + BIOS) in one go.
Utilizing Cosmopolitan Libc, she was able to port OpenBSD's pledge to Linux. Here's a nice blog post she wrote about it.
📋A quick disclaimer: Just because you can compile a C program for 7 different platforms doesn't mean it will run successfully on all of them. You also need to handle program dependencies. For instance, iptables uses Linux sockets, so you can't expect it to work magically on Windows unless you come up with a way to provide Linux socket networking on Windows.
Restrict system calls with pledge
You might be surprised to know that a single binary can run on 7 different platforms: Windows, Linux, Mac, FreeBSD, OpenBSD, NetBSD and BIOS.
These binary files are called Actually Portable Executables (APE). You can check out this blog for more information. These binaries carry the .com suffix, and the suffix is necessary for them to work.
This guide will show how to use the pledge.com binary on your Linux system to restrict system calls when launching any binary or application.
    Step 1: Download pledge.com
You can download pledge-1.8.com from this URL: http://justine.lol/pledge/pledge-1.8.com
    You can rename the file pledge-1.8.com to pledge.com.
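If you prefer to grab and rename the file from the terminal, here's a quick sketch using wget (assuming wget is installed; curl -O works just as well):
wget http://justine.lol/pledge/pledge-1.8.com
mv pledge-1.8.com pledge.com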
    Step 2: Make it executable
    Run this command to make it executable.
chmod +x ./pledge.com
Step 3: Add pledge.com to the path
A quick way to accomplish this is to move the binary to the standard /usr/local/bin/ location.
sudo mv ./pledge.com /usr/local/bin
Step 4: Run and test
pledge.com curl http://itsfoss.com
I didn't assign any permissions (called promises) to it, so it fails as expected. But the error output hints at which system calls the 'curl' binary requires when it runs.
    With this information, you can see if a program is requesting a system call that it should not. For example, a file explorer program asking for dns. Is it normal?
    Curl is a tool that deals with URLs and indeed requires those system calls.
    Let's assign promises using the -p flag. I'll explain what each of these promises does in the next section.
pledge.com -p 'stdio rpath inet dns tty sendfd recvfd' \
curl -s http://itsfoss.com
📋The debug message error:pledge inet for socket is misleading. A similar open issue exists at the project's GitHub repo. It is evident that after providing this set of promises, "stdio rpath inet dns tty sendfd recvfd", to our curl binary, it works as expected.
It successfully redirects to the https version of our website. Let's see if, with the same set of promises, curl can talk to https-enabled websites too.
pledge.com -p 'stdio rpath inet dns tty sendfd recvfd' \
curl -s https://itsfoss.com
Yeah! It worked.
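If you find yourself typing that promise list often, one optional convenience (not from the original article, just a suggestion) is to wrap it in a shell alias:
alias pcurl="pledge.com -p 'stdio rpath inet dns tty sendfd recvfd' curl"
pcurl -s https://itsfoss.com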
    A quick glance at promises
In the above section, we used 7 promises to make our curl request successful. Here's a quick glimpse into what each promise is intended for:
stdio: Allows reading and writing to standard input/output (like printing to the console).
rpath: Allows reading files from the filesystem.
inet: Allows network-related operations (for example, connecting to a server).
dns: Allows resolving DNS queries.
tty: Allows access to the terminal.
sendfd: Allows sending file descriptors.
recvfd: Allows receiving file descriptors.
To know what other promises are supported by the pledge binary, head over to this blog.
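As a smaller illustration, here is a hedged sketch of sandboxing a purely local command with a much tighter promise set; the exact promises a given binary needs can vary, so adjust them based on the errors pledge reports:
# no inet or dns promises: ls has no business talking to the network
pledge.com -p 'stdio rpath tty' ls -l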
Porting OpenBSD pledge() to Linux: Sandboxing for Linux has never been easier.
Conclusion
OpenBSD's pledge follows the least-privilege model. It prevents programs from misusing system resources. Under this security model, the damage done by a malicious application can be quite limited. Although Linux has seccomp and AppArmor in its security arsenal, I find pledge more intuitive and easier to use.
With Actually Portable Executables (APE), Linux users can now enjoy the simplicity of pledge to make their systems more secure. Having granular control over what processes can do adds an extra layer of defense.
    Author Info
Bhuwan Mishra is a full-stack developer, with Python and Go as his tools of choice. He takes pride in building and securing web applications, APIs, and CI/CD pipelines, as well as tuning servers for optimal performance. He is also passionate about working with Kubernetes.
  10. by: Temani Afif
    Fri, 07 Mar 2025 13:14:12 +0000

    In the last article, we created a CSS-only star rating component using the CSS mask and border-image properties, as well as the newly enhanced attr() function. We ended with CSS code that we can easily adjust to create component variations, including a heart rating and volume control.
    This second article will study a different approach that gives us more flexibility. Instead of the border-image trick we used in the first article, we will rely on scroll-driven animations!
    Here is the same star rating component with the new implementation. And since we’re treading in experimental territory, you’ll want to view this in Chrome 115+ while we wait for Safari and Firefox support:
    CodePen Embed Fallback Do you spot the difference between this and the final demo in the first article? This time, I am updating the color of the stars based on how many of them are selected — something we cannot do using the border-image trick!
    I highly recommend you read the first article before jumping into this second part if you missed it, as I will be referring to concepts and techniques that we explored over there.
    One more time: At the time of writing, only Chrome 115+ and Edge 115+ fully support the features we will be using in this article, so please use either one of those as you follow along.
    Why scroll-driven animations?
    You might be wondering why we’re talking about scroll-driven animation when there’s nothing to scroll to in the star rating component. Scrolling? Animation? But we have nothing to scroll or animate! It’s even more confusing when you read the MDN explainer for scroll-driven animations:
    But if you keep reading you will see that we have two types of scroll-based timelines: scroll progress timelines and view progress timelines. In our case, we are going to use the second one; a view progress timeline, and here is how MDN describes it:
    You can check out the CSS-Tricks almanac definition for view-timeline-name while you’re at it for another explanation.
    Things start to make more sense if we consider the thumb element as the subject and the input element as the scroller. After all, the thumb moves within the input area, so its visibility changes. We can track that movement as a percentage of progress and convert it to a value we can use to style the input element. We are essentially going to implement the equivalent of document.querySelector("input").value in JavaScript but with vanilla CSS!
    The implementation
    Now that we have an idea of how this works, let’s see how everything translates into code.
    @property --val { syntax: "<number>"; inherits: true; initial-value: 0; } input[type="range"] { --min: attr(min type(<number>)); --max: attr(max type(<number>)); timeline-scope: --val; animation: --val linear both; animation-timeline: --val; animation-range: entry 100% exit 0%; overflow: hidden; } @keyframes --val { 0% { --val: var(--max) } 100% { --val: var(--min) } } input[type="range"]::thumb { view-timeline: --val inline; } I know, this is a lot of strange syntax! But we will dissect each line and you will see that it’s not all that complex at the end of the day.
    The subject and the scroller
    We start by defining the subject, i.e. the thumb element, and for this we use the view-timeline shorthand property. From the MDN page, we can read:
    I think it’s self-explanatory. The view timeline name is --val and the axis is inline since we’re working along the horizontal x-axis.
    Next, we define the scroller, i.e. the input element, and for this, we use overflow: hidden (or overflow: auto). This part is the easiest but also the one you will forget the most so let me insist on this: don’t forget to define overflow on the scroller!
    I insist on this because your code will work fine without defining overflow, but the values won’t be good. The reason is that the scroller exists but will be defined by the browser (depending on your page structure and your CSS) and most of the time it’s not the one you want. So let me repeat it another time: remember the overflow property!
    The animation
    Next up, we create an animation that animates the --val variable between the input’s min and max values. Like we did in the first article, we are using the newly-enhanced attr() function to get those values. See that? The “animation” part of the scroll-driven animation, an animation we link to the view timeline we defined on the subject using animation-timeline. And to be able to animate a variable we register it using @property.
    Note the use of timeline-scope which is another tricky feature that’s easy to overlook. By default, named view timelines are scoped to the element where they are defined and its descendant. In our case, the input is a parent element of the thumb so it cannot access the named view timeline. To overcome this, we increase the scope using timeline-scope. Again, from MDN:
    Never forget about this! Sometimes everything is correctly defined but nothing is working because you forget about the scope.
    There’s something else you might be wondering:
    To understand this, let’s first take the following example where you can scroll the container horizontally to reveal a red circle inside of it.
    CodePen Embed Fallback Initially, the red circle is hidden on the right side. Once we start scrolling, it appears from the right side, then disappears to the left as you continue scrolling towards the right. We scroll from left to right but our actual movement is from right to left.
    In our case, we don’t have any scrolling since our subject (the thumb) will not overflow the scroller (the input) but the main logic is the same. The starting point is the right side and the ending point is the left side. In other words, the animation starts when the thumb is on the right side (the input’s max value) and will end when it’s on the left side (the input’s min value).
    The animation range
    The last piece of the puzzle is the following important line of code:
    animation-range: entry 100% exit 0%; By default, the animation starts when the subject starts to enter the scroller from the right and ends when the subject has completely exited the scroller from the left. This is not good because, as we said, the thumb will not overflow the scroller, so it will never reach the start and the end of the animation.
    To rectify this we use the animation-range property to make the start of the animation when the subject has completely entered the scroller from the right (entry 100%) and the end of the animation when the subject starts to exit the scroller from the left (exit 0%).
    To summarize, the thumb element will move within input’s area and that movement is used to control the progress of an animation that animates a variable between the input’s min and max attribute values. We have our replacement for document.querySelector("input").value in JavaScript!
    I am deliberately using the same --val everywhere to confuse you a little and push you to try to understand what is going on. We usually use the dashed ident (--) notation to define custom properties (also called CSS variables) that we later call with var(). This is still true but that same notation can be used to name other things as well.
    In our examples we have three different things named --val:
The variable that is animated and registered using @property. It contains the selected value and is used to style the input.
The named view timeline defined by view-timeline and used by animation-timeline.
The keyframes named --val and called by animation.
Here is the same code written with different names for more clarity:
@property --val { syntax: "<number>"; inherits: true; initial-value: 0; } input[type="range"] { --min: attr(min type(<number>)); --max: attr(max type(<number>)); timeline-scope: --timeline; animation: value_update linear both; animation-timeline: --timeline; animation-range: entry 100% exit 0%; overflow: hidden; } @keyframes value_update { 0% { --val: var(--max) } 100% { --val: var(--min) } } input[type="range"]::thumb { view-timeline: --timeline inline; }
The star rating component
    All that we have done up to now is get the selected value of the input range — which is honestly about 90% of the work we need to do. What remains is some basic styles and code taken from what we made in the first article.
    If we omit the code from the previous section and the code from the previous article here is what we are left with:
    input[type="range"] { background: linear-gradient(90deg, hsl(calc(30 + 4 * var(--val)) 100% 56%) calc(var(--val) * 100% / var(--max)), #7b7b7b 0 ); } input[type="range"]::thumb { opacity: 0; } We make the thumb invisible and we define a gradient on the main element to color in the stars. No surprise here, but the gradient uses the same --val variable that contains the selected value to inform how much is colored in.
When, for example, you select three stars, the --val variable will equal 3 and the color stop of the first color will equal 3 * 100% / 5, or 60%, meaning three stars are colored in. That same color is also dynamic as I am using the hsl() function where the first argument (the hue) is a function of --val as well.
    Here is the full demo, which you will want to open in Chrome 115+ at the time I’m writing this:
    CodePen Embed Fallback And guess what? This implementation works with half stars as well without the need to change the CSS. All you have to do is update the input’s attributes to work in half increments. Remember, we’re yanking these values out of HTML into CSS using attr(), which reads the attributes and returns them to us.
    <input type="range" min=".5" step=".5" max="5"> CodePen Embed Fallback That’s it! We have our rating star component that you can easily control by adjusting the attributes.
    So, should I use border-image or a scroll-driven animation?
    If we look past the browser support factor, I consider this version better than the border-image approach we used in the first article. The border-image version is simpler and does the job pretty well, but it’s limited in what it can do. While our goal is to create a star rating component, it’s good to be able to do more and be able to style an input range as you want.
With scroll-driven animations, we have more flexibility since the idea is to first get the value of the input and then use it to style the element. I know it's not easy to grasp, but don't worry about that. You will encounter scroll-driven animations more often in the future, and they will become more familiar with time; this example will look easy to you before long.
It's worth noting that the code used to get the value is generic and can easily be reused even if you are not going to style the input itself. Getting the value of the input is independent of styling it.
    Here is a demo where I am adding a tooltip to a range slider to show its value:
    CodePen Embed Fallback Many techniques are involved to create that demo and one of them is using scroll-driven animations to get the input value and show it inside the tooltip!
    Here is another demo using the same technique where different range sliders are controlling different variables on the page.
    CodePen Embed Fallback And why not a wavy range slider?
    CodePen Embed Fallback This one is a bit crazy but it illustrates how far we go with styling an input range! So, even if your goal is not to create a star rating component, there are a lot of use cases where such a technique can be really useful.
    Conclusion
    I hope you enjoyed this brief two-part series. In addition to a star rating component made with minimal code, we have explored a lot of cool and modern features, including the attr() function, CSS mask, and scroll-driven animations. It’s still early to adopt all of these features in production because of browser support, but it’s a good time to explore them and see what can be done soon using only CSS.
    Article series
A CSS-Only Star Rating Component and More! (Part 1)
A CSS-Only Star Rating Component and More! (Part 2)
    A CSS-Only Star Rating Component and More! (Part 2) originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  11. by: Geoff Graham
    Thu, 06 Mar 2025 16:33:55 +0000

    Manuel Matuzović:
    This easily qualifies as a “gotcha” in CSS and is a good reminder that the cascade doesn’t know everything all at the same time. If a custom property is invalid, the cascade won’t ignore it, and it gets evaluated, which invalidates the declaration. And if we set an invalid custom property on a shorthand property that combines several constituent properties — like how background and animation are both shorthand for a bunch of other properties — then the entire declaration becomes invalid, including all of the implied constituents. No bueno indeed.
    What to do, then?
    Great advice, Manuel!
    Maybe don’t use custom properties in shorthand properties originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  12. by: Abhishek Prakash
    Thu, 06 Mar 2025 05:27:13 GMT

    Skype is being discontinued by Microsoft on 5th May.
Once a hallmark of the old internet, Skype was already dying a slow death. It just could not keep up with the competition from WhatsApp, Zoom, etc., despite Microsoft's backing.
    While there are open source alternatives to Skype, I doubt if friends and family would use them.
    I am not going to miss it, as I haven't used Skype in years. Let's keep it in the museum of Internet history.
    Speaking of the old internet, Digg is making a comeback. 20 years back, it was the 'front page of the internet'.
    💬 Let's see what else you get in this edition
VLC aiming for the Moon.
EA open sourcing its games.
GNOME 48 features to expect.
And other Linux news, tips, and, of course, memes!
This edition of FOSS Weekly is supported by ONLYOFFICE.
✨ ONLYOFFICE PDF Editor: Create, Edit and Collaborate on PDFs on Linux
    The ONLYOFFICE suite now offers an updated PDF editor that comes equipped with collaborative PDF editing and other useful features.
    You can deploy ONLYOFFICE Docs on your Linux server and integrate it with your favourite platform, such as Nextcloud, Moodle and more. Alternatively, you can download the free desktop app for your Linux distro.
Online PDF editor, reader and converter | ONLYOFFICE: View and create PDF files from any text document, spreadsheet or presentation, convert PDF to DOCX online, create fillable PDF forms. (ONLYOFFICE)
📰 Linux and Open Source News
Electronic Arts has open sourced four Command & Conquer games.
VLC is literally reaching for the Moon to mark its 20-year anniversary.
Internxt Drive has become the first cloud storage with post-quantum encryption.
Electronic Frontier Foundation has launched a new open source tool to detect eavesdropping on cellular networks.
GNOME 48 is just around the corner; check out what features are coming:
Discover What's New in GNOME 48 With Our Feature Rundown! GNOME 48 is just around the corner. Explore what's coming with it. (It's FOSS News, Sourav Rudra)
🧠 What We're Thinking About
    A German startup has published open source plans for its Nuclear Fusion power plant!
    As per the latest desktop market share report, macOS usage has seen a notable dip on Steam.
    🧮 Linux Tips, Tutorials and More
Get the 'Create new document' option back in the right-click context menu in GNOME.
Facing errors while trying to watch a DVD on Fedora? It can be fixed.
Learn to record a selected area or an application window with OBS Studio.
Knowing how to edit files with the Nano text editor might come in handy while dealing with config files.
New users often get confused with so many Ubuntu versions. This article helps clear the doubt.
Explained: Which Ubuntu Version Should I Use? Confused about Ubuntu vs Xubuntu vs Lubuntu vs Kubuntu? Want to know which Ubuntu flavor you should use? This beginner's guide helps you decide which Ubuntu you should choose. (It's FOSS, Abhishek Prakash)
👷 Homelab and Maker's Corner
    As a Kodi user, you cannot miss out on installing add-ons and builds. We also have a list of the best add-ons to spice up your media server.
    And you can use virtual keyboard with Raspberry Pi easily.
Using On-screen Keyboard in Raspberry Pi OS: Here's what you can do to use a virtual keyboard on Raspberry Pi OS. (It's FOSS, Abhishek Prakash)
✨ Apps Highlight
    Facing slow downloads on your Android smartphone? Aria2App can help.
Aria2App is a Super Fast Versatile Open-Source Download Manager for Android: A useful open-source download manager for Android. (It's FOSS News, Sourav Rudra)
lichess lets you compete with other players in online games of Chess.
    📽️ Video I am Creating for You
How much does an active cooler cool down a Raspberry Pi 5? Let's find out in this quick video.
Subscribe to It's FOSS YouTube Channel
🧩 Quiz Time
    For a change, you can take the text processing command crossword challenge.
Commands to Work With Text Files: Crossword. Solve this crossword with commands for text processing. (It's FOSS, Ankush Das)
💡 Quick Handy Tip
    You can play Lofi music in VLC Media Player. First, switch to the Playlist view in VLC by going into View → Playlist.
    Now, in the sidebar, scroll down and select Icecast Radio Directory. Here, search for Lofi in the search bar.
    Now, double-click on any Lo-fi channel to start playing. On the other hand, if you want to listen to music via the web browser, you can use freeCodeCamp.org Code Radio.
    🤣 Meme of the Week
    You didn't have to join the dark side, Firefox. 🫤
    🗓️ Tech Trivia
    In 1953, MIT's Whirlwind computer showcased an early form of system management software called "Director," developed by Douglas Ross. Demonstrated at a digital fire control symposium, Director automated resource allocation (like memory, storage, and printing), making it one of the earliest examples of an operating system-like program.
    🧑‍🤝‍🧑 FOSSverse Corner
    An important question has been raised by one of our longtime FOSSers.
Do we all see the same thing on the internet? I think we all assume we are seeing the same content on a website. But do we? Read this quote from an article on the Australian ABC news: "Many people are unaware that the internet they see is unique to them. Even if we surf the same news websites, we'll see different news stories based on our previous likes. And on a website like Amazon, almost every item and price we see is unique to us. It is chosen by algorithms based on what we were previously wanting to buy and willing to pay. There is…" (It's FOSS Community, nevj)
❤️ With love
    Share it with your Linux-using friends and encourage them to subscribe (hint: it's here).
    Share the articles in Linux Subreddits and community forums.
    Follow us on Google News and stay updated in your News feed.
    Opt for It's FOSS Plus membership and support us 🙏
    Enjoy FOSS 😄
  13. by: Sreenath
    Thu, 06 Mar 2025 03:09:13 GMT

When it comes to screen recording on Linux or any other operating system, OBS Studio is the go-to choice.
It offers all the needed features baked in, for users ranging from casual screen recorders to advanced streamers.
One such useful feature is recording only a part of the screen in OBS Studio. I'll share the detailed steps for Linux users in this tutorial.
🚧The method mentioned is based on a Wayland session. Also, this is a personal workflow, and if readers have better options, feel free to comment, so that I can improve the article for everyone.
Record an application window in OBS Studio
Before starting, first click on File → Settings from the OBS Studio main menu. Here, in the Settings window, go to the Video section and note the Canvas resolution and Output scale resolution for your system.
Note Canvas and Output Scale values
This will be helpful when you are reverting the settings in a later step.
    Step 1: Create a new source
    First, let's create a new source for our recording. Click on the “+” icon on the OBS Studio home screen as shown in the screenshot below. Select “Screen Capture (Pipewire)” option.
📋For X11 systems, this may be Display Capture (XSHM).
Click on "+" to add a new source
On the resulting window, give a name to the source and then click OK.
Give a name to the source
Once you press OK, you will be shown a dialog box to select the recording area.
    Step 2: Select the window to record
    Here, select the Window option from the top bar.
Select the window to be recorded.
Once you click on the Window option, you will be able to see all the open windows listed. Select a window that you want to record from the list, as shown in the screenshot above.
    This will give you a dialog box, with a preview of the window being recorded.
    Enable the cursor recording (if needed) and click OK.
Selected window in preview
Step 3: Crop the video to window size
    Now, in the main OBS window, you can see that the application you have selected is not filling the full canvas, in my case 1920×1080.
Empty space in canvas
If you keep recording with this setting, the output will contain this window and the rest of the canvas in black.
    You need to crop the area so that only the necessary part is present on the output file.
For this, right-click on the source and select the Resize Output (Source Size) option, as shown below:
Resize output source size
Click on Yes when prompted.
Accept Confirmation
As soon as you click Yes, you can see that the canvas is now reduced to the size of the window.
Canvas Resized
Step 4: Record the video
    You can now start recording the video using the Record button.
Start video recording
Once finished, stop recording, and the saved video file won't contain any part except the window.
    Step 5: Delete the video source
    Now that you have recorded the video, let's remove this particular source.
    Right-click on the source and select Remove.
Remove the source
Step 6: Revert the canvas and output scale
When we resized the canvas to the window, the change was also applied to your OBS Studio video settings. If left unchanged, your future videos will also be recorded at the reduced size.
    So, click on File in the OBS Studio main menu and select Settings.
Click on File → Settings
On the Settings window, go to Videos and revert the Base Canvas Resolution and Output Scaled Resolution to your preferred normal values. Then click Apply.
Revert Canvas Size to normal
Record an area on the screen in OBS Studio
    This is the same process as the one described above, except for the area selection.
    Step 1: Create a new source
    Click on the plus button on the Sources section in OBS Studio and select Screen Capture.
Select Screen Capture
Name the source and click OK.
    Step 2: Select a region
On the area selection dialog box, click on Region. From the section, select the Select Region option.
Select Region
Notice the cursor has now changed to a plus sign. Drag the area you want to record.
Select Area to Record
You can see that the preview now has the selected area. Don't forget to enable the cursor, if needed.
    It is normal that the canvas is way too big and your video occupies only a part of it.
Canvas Size Mismatch
Step 3: Resize the source
Like in the previous section, right-click on the source and select the Resize Output option.
Resize Output to Area Capture
Step 4: Record and revert the settings
    Start recording the video. Once it is completed, save the recording and remove the source. Revert the canvas and output scale settings, as shown in step 6 of the previous section.
    💬 Hope this guide has helped you record with OBS Studio. Please let me know if this tutorial helped you or if you need further help.
  14. Whereis Command Examples

    by: Abhishek Prakash
    Wed, 05 Mar 2025 20:45:06 +0530

The whereis command helps users locate the binary, source, and manual page files for a given command. In this tutorial, I will walk you through practical examples to help you understand how to use the whereis command.
    Unlike other search commands like find that scan the entire file system, whereis searches predefined directories, making it faster and more efficient.
    It is particularly useful for system administrators and developers to locate files related to commands without requiring root privileges.
    whereis Command Syntax
    To use any command to its maximum potential, it is important to know its syntax and that is why I'm starting this tutorial by introducing the syntax of the whereis command:
whereis [OPTIONS] FILE_NAME...
Here,
OPTIONS: Flags that modify the search behavior.
FILE_NAME: The name of the file or command to locate.
Now, let's take a look at available options of the whereis command:
-b: Search only for binary files.
-s: Search only for source files.
-m: Search only for manual pages.
-u: Search for unusual files (files missing one or more of binary, source, or manual).
-B: Specify directories to search for binary files (must be followed by -f).
-S: Specify directories to search for source files (must be followed by -f).
-M: Specify directories to search for manual pages (must be followed by -f).
-f: Terminate directory lists provided with -B, -S, or -M, signaling the start of file names.
-l: Display directories that are searched by default.
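Before moving on to the examples, the -l flag from the list above is a handy way to see exactly which directories whereis searches by default:
whereis -l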
1. Locate all files related to a command
To find all files (binary, source, and manual) related to a command, all you have to do is append the command name to the whereis command as shown here:
whereis command
For example, if I want to locate all files related to bash, then I would use the following:
whereis bash
Here,
/usr/bin/bash: Path to the binary file.
/usr/share/man/man1/bash.1.gz: Path to the manual page.
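Since the syntax accepts more than one FILE_NAME, you can also query several commands in a single call; the command names below are just arbitrary examples:
whereis ls cp mv
Each name gets its own line of output, which is handy when auditing a handful of tools at once.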
2. Search for binary files only
To locate only the binary (executable) file of a command, use the -b flag along with the target command as shown here:
whereis -b command
If I want to search for the binary files for the ls command, then I would use the following:
whereis -b ls
3. Search for the manual page only
    To locate only the manual page for a specific command, use the -m flag along with the targeted command as shown here:
whereis -m command
For example, if I want to search for the manual page location for the grep command, then I would use the following:
whereis -m grep
As you can see, it gave me two locations:
/usr/share/man/man1/grep.1.gz: A manual page which can be accessed through man grep command.
/usr/share/info/grep.info.gz: An info page that can be accessed through info grep command.
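Since the manual page sits under man1, you can also open that section explicitly, which helps when a name exists in more than one manual section; a small illustrative sketch:
man 1 grep
info grep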
4. Search for source files only
To find only source code files associated with a command, use the -s flag along with the targeted command as shown here:
whereis -s command
For example, if I want to search for the source files of gcc, I would use the following:
whereis -s gcc
My system is a fresh install and I haven't installed any packages from source, so I got a blank output.
    5. Specify custom directories for searching
    To limit your search to specific directories, use options like -B, -S, or -M. For example, if I want to limit my search to the /bin directory for the cp command, then I would use the following command:
whereis -b -B /bin -f cp
Here,
-b: This flag tells whereis to search only for binary files (executables), ignoring source and manual files.
-B /bin: The -B flag specifies a custom directory (/bin in this case) where whereis should look for binary files. It also limits the search to the /bin directory instead of searching all default directories.
-f cp: Without -f, the whereis command would interpret cp as another directory.
6. Identify commands missing certain files (unusual files)
    The whereis command can help you find commands that are missing one or more associated files (binary, source, or manual). This is particularly useful for troubleshooting or verifying file completeness.
For example, to search for commands in the /bin directory that are missing manual pages, you first have to change your directory to /bin and then use the -u flag to search for unusual files:
cd /bin
whereis -u -m *
Wrapping Up...
    This was a quick tutorial on how you can use the whereis command in Linux including practical examples and syntax. I hope you will find this guide helpful.
    If you have any queries or suggestions, leave us a comment.
  15. by: Preethi
    Wed, 05 Mar 2025 13:16:32 +0000

    Grouping selected items is a design choice often employed to help users quickly grasp which items are selected and unselected. For instance, checked-off items move up the list in to-do lists, allowing users to focus on the remaining tasks when they revisit the list.
    We’ll design a UI that follows a similar grouping pattern. Instead of simply rearranging the list of selected items, we’ll also lay them out horizontally using CSS Grid. This further distinguishes between the selected and unselected items.
    We’ll explore two approaches for this. One involves using auto-fill, which is suitable when the selected items don’t exceed the grid container’s boundaries, ensuring a stable layout. In contrast, CSS Grid’s span keyword provides another approach that offers greater flexibility.
    The HTML is the same for both methods:
    <ul> <li> <label> <input type="checkbox" /> <div class=icon>🍱</div> <div class=text>Bento</div> </label> </li> <li> <label> <input type="checkbox" /> <div class=icon>🍡</div> <div class=text>Dangos</div> </label> </li> <!-- more list items --> </ul> The markup consists of an unordered list (<ul>). However, we don’t necessarily have to use <ul> and <li> elements since the layout of the items will be determined by the CSS grid properties. Note that I am using an implicit <label> around the <input> elements mostly as a way to avoid needing an extra wrapper around things, but that explicit labels are generally better supported by assistive technologies.
    Method 1: Using auto-fill
    CodePen Embed Fallback ul { width: 250px; display: grid; gap: 14px 10px; grid-template-columns: repeat(auto-fill, 40px); justify-content: center; /* etc. */ } The <ul> element, which contains the items, has a display: grid style rule, turning it into a grid container. It also has gaps of 14px and 10px between its grid rows and columns. The grid content is justified (inline alignment) to center.
    The grid-template-columns property specifies how column tracks will be sized in the grid. Initially, all items will be in a single column. However, when items are selected, they will be moved to the first row, and each selected item will be in its own column. The key part of this declaration is the auto-fill value.
    The auto-fill value is added where the repeat count goes in the repeat() function. This ensures the columns repeat, with each column’s track sizing being the given size in repeat() (40px in our example), that will fit inside the grid container’s boundaries.
    For now, let’s make sure that the list items are positioned in a single column:
    li { width: inherit; grid-column: 1; /* Equivalent to: grid-column-start: 1; grid-column-end: auto; */ /* etc. */ } When an item is checked, that is when an <li> element :has() a :checked checkbox, we’re selecting that. And when we do, the <li> is given a grid-area that puts it in the first row, and its column will be auto-placed within the grid in the first row as per the value of the grid-template-columns property of the grid container (<ul>). This causes the selected items to group at the top of the list and be arranged horizontally:
    li { width: inherit; grid-column: 1; /* etc. */ &:has(:checked) { grid-area: 1; /* Equivalent to: grid-row-start: 1; grid-column-start: auto; grid-row-end: auto; grid-column-end: auto; */ width: 40px; /* etc. */ } /* etc. */ } And that gives us our final result! Let’s compare that with the second method I want to show you.
    Method 2: Using the span keyword
    CodePen Embed Fallback We won’t be needing the grid-template-columns property now. Here’s the new <ul> style ruleset:
    ul { width: 250px; display: grid; gap: 14px 10px; justify-content: center; justify-items: center; /* etc. */ } The inclusion of justify-items will help with the alignment of grid items as we’ll see in a moment. Here are the updated styles for the <li> element:
    li { width: inherit; grid-column: 1 / span 6; /* Equivalent to: grid-column-start: 1; grid-column-end: span 6; */ /* etc. */ } As before, each item is placed in the first column, but now they also span six column tracks (since there are six items). This ensures that when multiple columns appear in the grid, as items are selected, the following unselected items remain in a single column under the selected items — now the unselected items span across multiple column tracks. The justify-items: center declaration will keep the items aligned to the center.
    li { width: inherit; grid-column: 1 / span 6; /* etc. */ &:has(:checked) { grid-area: 1; width: 120px; /* etc. */ } /* etc. */ } The width of the selected items has been increased from the previous example, so the layout of the selection UI can be viewed for when the selected items overflow the container.
    Selection order
    The order of selected and unselected items will remain the same as the source order. If the on-screen order needs to match the user’s selection, dynamically assign an incremental order value to the items as they are selected.
    onload = ()=>{ let i=1; document.querySelectorAll('input').forEach((input)=>{ input.addEventListener("click", () => { input.parentElement.parentElement.style.order = input.checked ? i++ : (i--, 0); }); }); } CodePen Embed Fallback Wrapping up
    CSS Grid helps make both approaches very flexible without a ton of configuration. By using auto-fill to place items on either axis (rows or columns), the selected items can be easily grouped within the grid container without disturbing the layout of the unselected items in the same container, for as long as the selected items don’t overflow the container.
    If they do overflow the container, using the span approach helps maintain the layout irrespective of how long the group of selected items gets in a given axis. Some design alternatives for the UI are grouping the selected items at the end of the list, or swapping the horizontal and vertical structure.
    Grouping Selection List Items Together With CSS Grid originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  16. QuestionAI

    by: aiparabellum.com
    Wed, 05 Mar 2025 08:08:08 +0000

    QuestionAI represents the next evolution in question-answering technology, bridging the gap between vast information repositories and human curiosity. Unlike traditional search engines that return lists of potentially relevant links, QuestionAI directly delivers concise, accurate answers drawn from analyzing multiple sources. The platform understands natural language queries, recognizes context, and processes information much like a human researcher would—but at machine speed. Designed for professionals, researchers, students, and curious minds alike, QuestionAI serves as an intelligent assistant that not only answers questions but also helps users explore topics more deeply through related insights and explanations.
    Features
Natural Language Processing: Ask questions in everyday language just as you would to a human expert, without needing to format queries in any special way.
Contextual Understanding: The AI maintains conversation context, allowing for follow-up questions without repeating background information.
Multi-Source Analysis: Draws information from diverse and verified sources to provide comprehensive answers that consider multiple perspectives.
Citation Tracking: Clearly identifies the sources of information, allowing users to verify facts and explore topics more deeply.
Customizable Knowledge Base: Ability to connect to specific document sets or databases for specialized questioning in professional environments.
Visual Response Options: Receives responses not just as text but also as charts, graphs, and other visualizations when appropriate.
Multilingual Support: Ask questions and receive answers in multiple languages, breaking down language barriers to information.
Conversation History: Maintains a searchable log of previous questions and answers for easy reference and continued learning.
Integration Capabilities: Connects with popular productivity tools, research platforms, and learning management systems.
Privacy Controls: Robust security features ensure sensitive questions and information remain confidential.
How It Works
Question Input: Users enter their question in natural language through the clean, intuitive interface.
AI Processing: The system analyzes the question to understand intent, context, and required information.
Knowledge Base Search: QuestionAI scans its vast knowledge repositories, identifying relevant information from verified sources.
Answer Synthesis: Rather than simply retrieving information, the AI synthesizes a coherent, comprehensive answer tailored to the specific question.
Source Attribution: The system attributes information to its original sources, allowing users to verify claims and explore further.
Refinement Loop: Users can ask follow-up questions or request clarification, with the system maintaining context from the previous exchanges.
Continuous Learning: The platform improves over time, learning from interactions to provide increasingly relevant and accurate responses.
Benefits
Time Efficiency: Receive immediate answers to complex questions without sifting through multiple search results or documents.
Information Accuracy: Benefit from answers drawn from verified sources with clear citations, reducing the risk of misinformation.
Knowledge Depth: Explore topics more thoroughly with an AI that can handle increasingly specific and technical questions.
Research Enhancement: Accelerate research processes by quickly establishing foundational knowledge before diving deeper.
Learning Support: Supplement educational experiences with explanations that adapt to different knowledge levels.
Decision Support: Make more informed decisions by accessing relevant information quickly and in context.
Productivity Boost: Reduce time spent searching for information and more time applying insights to tasks and projects.
Linguistic Accessibility: Access information regardless of language barriers through multilingual support.
Reduced Cognitive Load: Offload the mental effort of searching and synthesizing information to an AI assistant.
Continuous Knowledge Access: Stay informed with up-to-date information through a system that regularly updates its knowledge base.
Pricing
Free Tier: Limited daily questions with basic functionality, perfect for casual users exploring the platform.
Personal Plan ($19/month): Unlimited questions, full feature access, and priority processing for individual users.
Professional Plan ($49/month): Enhanced capabilities including document uploads, team sharing features, and advanced analytics.
Enterprise Solution (Custom Pricing): Tailored implementation with private knowledge bases, enhanced security, dedicated support, and custom integrations.
Educational Discount: Special pricing for students and educational institutions with additional learning-focused features.
Annual Subscription: Save 20% when purchasing an annual subscription for any paid tier.
Review
    QuestionAI delivers on its promise to revolutionize information retrieval through artificial intelligence. During our extensive testing, the platform consistently provided relevant, accurate answers to questions ranging from straightforward facts to complex conceptual inquiries.
    The natural language processing capabilities are particularly impressive, accurately interpreting questions even when phrased conversationally or with ambiguous terms. The system rarely misunderstands intent and, when it does, the clarification process is smooth and intuitive.
    Conclusion
    QuestionAI stands at the forefront of a new paradigm in information access, where artificial intelligence serves not just as a tool for finding information but as an active partner in knowledge exploration and synthesis. By combining advanced natural language processing, comprehensive knowledge repositories, and intuitive design, it delivers on the promise of AI as an extension of human cognitive capabilities.
    Visit Website The post QuestionAI appeared first on AI Parabellum • Your Go-To AI Tools Directory for Success.
  17. by: Abhishek Prakash
    Wed, 05 Mar 2025 03:12:16 GMT

    From Kiosk projects to homelab dashboards, there are numerous usage of a touch screen display with Raspberry Pi.
    And it makes total sense to use the on-screen keyboard on the touch device rather than plugging in a keyboard and mouse.
    Thankfully, the latest versions of Raspberry Pi OS provide a simple way to install and use the on-screen keyboard.
On-screen keyboard on Raspberry Pi
Let me show you how you can install on-screen keyboard support on Raspberry Pi OS.
📋I am using the DIY Touchscreen by SunFounder (partner link). It's an interesting display that is also compatible with other SBCs. I'll be doing its full review next week. The steps should work on other touch screens, too.
SunFounder Latest 10 Inch DIY Touch Screen All-In-One Solution for Raspberry Pi 5, IPS HD 1280x800 LCD, Built-In USB-C PD 5.1V/5A Output, HDMI, 10-point, No Driver, Speakers, for RPi 5/4/3/Zero 2W: This SunFounder Touch Screen is a 10-point IPS touch screen in a 10.1″ big size and with a high resolution of 1280x800, bringing you perfect visual experience. It works with various operating systems including Raspberry Pi OS, Ubuntu, Ubuntu Mate, Windows, Android, and Chrome OS. (SunFounder, Partner Link)
    Just check if you already have the on-screen keyboard support
    Raspberry Pi OS Bookworm and later versions include the Squeekboard software for the on-screen keyboard feature.
Now, this package may already be installed by default. If you open a terminal, tap the interface, and the keyboard comes up, you already have everything set up.
    It is also possible that it is installed but not enabled.
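One quick way to check from the terminal is to query the package status with dpkg (this assumes the squeekboard package name used on Raspberry Pi OS Bookworm and later):
dpkg -s squeekboard
If it reports that the package is not installed, follow the installation steps in the next section.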
Go to the menu, then Preferences, and open the Raspberry Pi Configuration tool. In the Display tab, see if you can change the settings for the on-screen keyboard.
On-screen keyboard support already installed on Raspberry Pi
If you tap the on-screen keyboard settings and it says, "A virtual keyboard is not installed", you will have to install the software first. The next section details the steps.
Virtual Keyboard is not installed
Getting on-screen keyboard in Raspberry Pi OS Bookworm
🚧You'll need a physical keyboard and mouse for installing the required package. If you cannot connect one, you could try to SSH into the Pi.
Update the package cache of your Raspberry Pi first:
sudo apt update
The squeekboard package provides the virtual keyboard in Debian. Install it using the command below:
sudo apt install squeekboard
Once installed, click on the menu and start Raspberry Pi Configuration from the Preferences.
Access Raspberry Pi Configuration
In the Raspberry Pi Configuration tool, go to the Display tab and touch it.
    You'll see three options:
Enabled always: The on-screen keyboard will always be accessible through the top panel, whether you are using a touchscreen or not.
Enabled if touchscreen found: The on-screen keyboard is only accessible when a touchscreen is detected.
Disabled: The virtual keyboard won't be accessible at all.
Out of these three, you'll be tempted to go for 'Enabled if touchscreen found'.
    However, it didn't work for me. I opted for Enabled always instead.
But not all applications will automatically bring up the on-screen keyboard. In my case, Chromium didn't play well. Thankfully, the on-screen keyboard icon in the top panel lets you access it at will.
Virtual keyboard comes up for supported applications but it is also accessible from the top panel
And this way, you can enjoy the keyboard on a touchscreen.
    Conclusion
    For older versions of Raspberry Pi OS, you could also go with the matchbox-keyboard package.
sudo apt install matchbox-keyboard
Since Squeekboard is for Wayland, perhaps Matchbox will work on the Xorg display server.
    The official documents of SunFounder's Touchscreen mentions that Squeekboard is installed by default in Raspberry Pi OS but that was not the case for me.
Installing it was a matter of one command, and then the virtual keyboard was up and running. This was tested on Raspberry Pi OS, but since Squeekboard is available for Wayland in general, it might work on other operating systems, too.
    💬 Did it work for you? If yes, a simple 'thank you' will encourage me. If not, please provide the details and I'll try to help you.
  18. MyReport by Alaba AI

    by: aiparabellum.com
    Wed, 05 Mar 2025 02:53:02 +0000

    In today’s data-driven business landscape, making sense of complex information quickly can be the difference between success and stagnation. Enter MyReport by Alaba AI, a cutting-edge tool designed to transform how professionals interact with and extract value from their data. This innovative platform leverages the power of artificial intelligence to streamline report creation, making business intelligence accessible to everyone, regardless of their technical expertise.
    Introduction
    MyReport stands at the intersection of artificial intelligence and business analytics, offering a seamless solution for professionals seeking to make data-driven decisions without the typical technical hurdles. The platform enables users to generate comprehensive, visually appealing reports from raw data in minutes rather than hours. By eliminating the tedious aspects of report creation, MyReport frees up valuable time for analysis and strategic planning, allowing teams to focus on what truly matters: extracting actionable insights from their data.
    Features of MyReport by Alaba AI
    Natural Language Processing: Simply tell the AI what kind of report you need in plain English, and watch as MyReport transforms your request into a structured document. Multi-Format Data Handling: Upload data in various formats including CSV, Excel, PDF, and more without worrying about compatibility issues. Automated Data Visualization: The platform intelligently selects the most appropriate charts, graphs, and visual representations for your data. Customizable Templates: Choose from professionally designed templates or create your own to maintain brand consistency across all reports. Collaborative Workspace: Multiple team members can work on reports simultaneously, leaving comments and suggestions in real-time. AI-Powered Insights: Beyond visualization, MyReport highlights key trends, anomalies, and potential opportunities hidden in your data. One-Click Sharing Options: Distribute finished reports across various channels with ease, including direct links, email, and integration with common workplace tools. Revision History: Track changes and revert to previous versions if needed, ensuring nothing gets lost in the iteration process. How It Works
    Data Upload: Begin by uploading your raw data files to MyReport through a simple drag-and-drop interface. AI Analysis: The system automatically analyzes your data, identifying relationships, patterns, and potential points of interest. Report Generation: Using natural language prompts, tell the AI what aspects of the data you want to highlight in your report. Customization: Fine-tune the generated report by adjusting visualizations, adding commentary, or rearranging sections to better tell your data story. Collaboration: Invite team members to review and contribute to the report before finalization. Export and Share: Download your completed report in multiple formats or share it directly with stakeholders through various channels. Benefits of MyReport by Alaba AI
    Time Efficiency: Reduce report creation time by up to 80% with MyReport, allowing your team to focus on analysis rather than formatting. Accessibility: Democratize data analysis by eliminating the need for specialized technical skills to create professional reports. Consistency: Ensure all company reports follow the same high-quality standards regardless of who creates them. Enhanced Decision Making: Surface insights that might be missed in manual analysis, leading to more informed business decisions. Cost Reduction: Lower the overhead costs associated with dedicated data analysis teams or specialized software training. Scalability: Whether you’re creating one report a month or dozens a day, MyReport scales to meet your needs without performance degradation. Error Reduction: Minimize human errors in data handling and calculation through automated processing. Professional Presentation: Impress clients and stakeholders with polished, visually appealing reports that effectively communicate complex information. Pricing
    Free Tier: Access basic MyReport features with limited monthly reports and standard templates. Perfect for individuals or small teams just getting started. Professional Plan ($29/month): Unlock advanced visualization options, custom branding, and increased report volume. Ideal for growing businesses with regular reporting needs. Business Plan ($79/month): Gain access to all premium features, priority processing, and team collaboration tools. Designed for established companies with multiple departments. Enterprise Solution (Custom Pricing): Tailored packages for large organizations with specific requirements, including dedicated support, custom integrations, and unlimited reporting capacity. Annual Discount: Save 20% on any paid plan when billed annually rather than monthly. Review
    After extensive testing, MyReport by Alaba AI proves itself to be a formidable ally in the quest for data-driven decision making. The platform’s intuitive interface belies its powerful capabilities, making it accessible to novices while still satisfying the demands of experienced data professionals.
    The natural language processing functionality stands out as particularly impressive, accurately interpreting even complex requests with minimal guidance. During our testing, we found that MyReport could handle industry-specific terminology with surprising fluency, suggesting a robust training model behind the scenes.
    Conclusion
    MyReport by Alaba AI emerges as a powerful solution in the evolving landscape of business intelligence tools. By combining sophisticated AI capabilities with user-friendly design, it effectively bridges the gap between raw data and actionable insights. While not without its learning curve, the platform’s ability to dramatically reduce report creation time while improving output quality makes it a compelling choice for organizations of all sizes.
  19. by: Sreenath V
    Tue, 04 Mar 2025 20:23:37 +0530

    Kubernetes is a powerful platform designed to manage and automate the deployment, scaling, and operation of containerized applications. In simple terms, it helps you run and manage your software applications in an organized and efficient way.
    kubectl is the command-line tool that helps you manage your Kubernetes cluster. It allows you to deploy applications, manage resources, and get information about your applications. Simply put, kubectl is the main tool you use to communicate with Kubernetes and get things done.
    In this article, we will explore essential kubectl commands that will make managing your Kubernetes cluster easier and more efficient.
    Essential Kubernetes Concepts
    Before diving into the commands, let's quickly review some key Kubernetes concepts to ensure a solid understanding.
    Pod: The smallest deployable unit in Kubernetes, containing one or more containers that run together on the same node. Node: A physical or virtual machine in the Kubernetes cluster where Pods are deployed. Services: An abstraction that defines a set of Pods and provides a stable network endpoint to access them. Deployment: A controller that manages the desired state and lifecycle of Pods by creating, updating, and deleting them. Namespace: A logical partition in a Kubernetes cluster to isolate and organize resources for different users or teams. General Command Line Options
    This section covers various optional flags and parameters that can be used with different kubectl commands. These options help customize the output format, specify namespaces, filter resources, and more, making it easier to manage and interact with your Kubernetes clusters.
    The get command in kubectl is used to retrieve information about Kubernetes resources. It can list various resources such as pods, services, nodes, and more.
    To retrieve a list of all the pods in your Kubernetes cluster in JSON format,
    kubectl get pods -o json List all the pods in the current namespace and output their details in YAML format.
    kubectl get pods -o yaml Output the details in plain-text format, including the node name for each pod,
    kubectl get pods -o wide List all the pods in a specific namespace using the -n option:
kubectl get pods -n <namespace_name> To create a Kubernetes resource from a configuration file, use the command:
    kubectl create -f <filename> To filter logs by a specific label, you can use:
    kubectl logs -l <label_filter> For example, to get logs from all pods labeled app=myapp, you would use:
    kubectl logs -l app=myapp For quick command line help, always use the -h option.
    kubectl -h Create and Delete Kubernetes Resources
    In Kubernetes, you can create resources using the kubectl create command, update or apply changes to existing resources with the kubectl apply command, and remove resources with the kubectl delete command. These commands allow you to manage the lifecycle of your Kubernetes resources effectively and efficiently.
    The apply and create are two different approaches to create resources in Kubernetes. While the apply follows a declarative approach, create follows an imperative approach.
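As a quick, hypothetical sketch of the two styles (the deployment name web and the manifest file name are placeholders, not something from this article):
# imperative: tell the cluster exactly what to do
kubectl create deployment web --image=nginx
# declarative: describe the desired state in a file and let Kubernetes reconcile it
kubectl apply -f web-deployment.yaml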
    Learn about these different approaches in our dedicated article.
kubectl apply vs create: What's the Difference? (Linux Handbook)
To apply a configuration file to a pod, use the command:
    kubectl apply -f <JSON/YAML configuration file> If you have multiple JSON/YAML configuration files, you can use glob pattern matching here:
    kubectl apply -f '*.json' To create a new Kubernetes resource using a configuration file,
kubectl create -f <configuration file> The -f option can also take a directory or a configuration file URL to create resources.
    kubectl create -f <directory> OR kubectl create -f <URL to files> The delete option is used to delete resources by file names, resources and names, or by resources and label selector.
    To delete resources using the type and name specified in the configuration file,
    kubectl delete -f <configuration file> Cluster Management and Context Commands
    Cluster management in Kubernetes refers to the process of querying and managing information about the Kubernetes cluster itself. According to the official documentation, it involves various commands to display endpoint information, view and manage cluster configurations, list API resources and versions, and manage contexts.
    The cluster-info command displays the endpoint information about the master and services in the cluster.
    kubectl cluster-info To print the client and server version information for the current context, use:
    kubectl version To display the merged kubeconfig settings,
    kubectl config view To extract and display the names of all users from the kubeconfig file, you can use a jsonpath expression.
    kubectl config view -o jsonpath='{.users[*].name}' Display the current context that kubectl is using,
    kubectl config current-context You can display a list of contexts with the get-context option.
    kubectl config get-contexts To set the default context, use:
    kubectl config use-context <context-name> Print the supported API resources on the server.
    kubectl api-resources It includes core resources like pods, services, and nodes, as well as custom resources defined by users or installed by operators.
    You can use the api-versions command to print the supported API versions on the server in the form of "group/version". This command helps you identify which API versions are available and supported by your Kubernetes cluster.
    kubectl api-versions The --all-namespaces option available with the get command can be used to list the requested object(s) across all namespaces. For example, to list all pods existing in all namespaces,
    kubectl get pods --all-namespaces Daemonsets
    A DaemonSet in Kubernetes ensures that all (or some) Nodes run a copy of a specified Pod, providing essential node-local facilities like logging, monitoring, or networking services. As nodes are added or removed from the cluster, DaemonSets automatically add or remove Pods accordingly. They are particularly useful for running background tasks on every node and ensuring node-level functionality throughout the cluster.
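Since DaemonSets are normally defined in a manifest, here is a minimal, hypothetical sketch that runs a node-level agent on every node (the names, labels, and image are placeholders):
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: agent
        image: busybox
        command: ["sleep", "3600"]
EOF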
You can create a new DaemonSet by applying a manifest file:
kubectl apply -f <daemonset_manifest.yaml> To list one or more DaemonSets, use the command:
    kubectl get daemonset The command,
    kubectl edit daemonset <daemonset_name> will open up the specified DaemonSet in the default editor so you can edit and update the definition.
    To delete a daemonset,
    kubectl delete daemonset <daemonset_name> You can check the rollout status of a daemonset with the kubectl rollout command:
kubectl rollout status daemonset <daemonset_name> The command below provides detailed information about the specified DaemonSet in the given namespace:
    kubectl describe ds <daemonset_name> -n <namespace_name> Deployments
    Kubernetes deployments are essential for managing and scaling applications. They ensure that the desired number of application instances are running at all times, making it easy to roll out updates, perform rollbacks, and maintain the overall health of your application by automatically replacing failed instances.
    In other words, Deployment allows you to manage updates for Pods and ReplicaSets in a declarative manner. By specifying the desired state in the Deployment configuration, the Deployment Controller adjusts the actual state to match at a controlled pace. You can use Deployments to create new ReplicaSets or replace existing ones while adopting their resources seamlessly. For more details, refer to StatefulSet vs. Deployment.
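To see how the commands described below fit together, here is a hypothetical end-to-end flow; the deployment name web and the nginx image tags are placeholders:
kubectl create deployment web --image=nginx:1.25    # create the deployment
kubectl rollout status deployment/web               # wait for the rollout to finish
kubectl set image deployment/web nginx=nginx:1.26   # rolling update to a new image
kubectl rollout undo deployment/web                 # roll back if something goes wrong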
    To list one or more deployments:
    kubectl get deployment To display detailed information about the specified deployment, including its configuration, events, and status,
    kubectl describe deployment <deployment_name> The below command opens the specified deployment configuration in the default editor, allowing you to make changes to its configuration:
    kubectl edit deployment <deployment_name> To create a deployment using kubectl, specify the image to use for the deployment:
    kubectl create deployment <deployment_name> --image=<image_name> You can delete a specified deployment and all of its associated resources, such as Pods and ReplicaSets by using the command:
kubectl delete deployment <deployment_name> To check the rollout status of the specified deployment and get information about the progress of its update process,
    kubectl rollout status deployment <deployment_name> Perform a rolling update in Kubernetes by setting the container image to a new version for a specific deployment.
kubectl set image deployment/<deployment_name> <container_name>=<image_name>:<new_image_version> To roll back the specified deployment to the previous revision (undo),
    kubectl rollout undo deployment/<deployment name> The command below will forcefully replace a resource from a configuration file:
    kubectl replace --force -f <configuration file> Retrieving and Filtering Events
    In Kubernetes, events are a crucial component for monitoring and diagnosing the state of your cluster. They provide real-time information about changes and actions happening within the system, such as pod creation, scaling operations, errors, and warnings.
To retrieve and list recent events for all resources in the system, providing valuable information about what has happened in your cluster, use the command:
kubectl get events
    To filter and list only the events of type "Warning," thereby providing insights into any potential issues or warnings in your cluster,
    kubectl get events --field-selector type=Warning You can retrieve and list events sorted by their creation timestamp. This allows you to view events in chronological order.
kubectl get events --sort-by=.metadata.creationTimestamp To list events, excluding those related to Pods,
    kubectl get events --field-selector involvedObject.kind!=Pod This helps you focus on events for other types of resources.
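Field selectors also combine with other get options. For instance, a small sketch of watching warning events in a specific namespace as they happen (the namespace is a placeholder):
kubectl get events -n <namespace_name> --field-selector type=Warning --watch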
    To list events specifically for a node with the given name,
kubectl get events --field-selector involvedObject.kind=Node,involvedObject.name=<node_name> You can filter events, excluding those that are of the "Normal" type, allowing you to focus on warning and error events that may require attention:
    kubectl get events --field-selector type!=Normal Managing Logs
    Logs are essential for understanding the real-time behavior and performance of your applications. They provide a record of activity and outputs generated by containers and pods, which can be invaluable for debugging and monitoring purposes.
    To print the logs for the specified pod:
    kubectl logs <pod_name> To print the logs for the specified pod since last hour:
    kubectl logs --since=1h <pod_name> You can read the most recent 50 lines of logs for the specified pod using the --tail option.
    kubectl logs --tail=50 <pod_name> The command below streams and continuously displays the logs of the specified pod, optionally filtered by the specified container:
    kubectl logs -f <pod_name> [-c <container_name>] For example, as per the official documentation,
kubectl logs -f -c ruby web-1 This begins streaming the logs of the ruby container in pod web-1.
    To continuously display the logs of the specified pod in real-time,
    kubectl logs -f <pod_name> You can fetch the logs up to the current point in time for a specific container within the specified pod using the command:
    kubectl logs -c <container_name> <pod_name> To save the logs for the specified pod to a file,
    kubectl logs <pod_name> > pod.log To print the logs for the previous instance of the specified pod:
    kubectl logs --previous <pod_name> This is particularly useful for troubleshooting and analyzing logs from a previously failed pod.
    Namespaces
    In Kubernetes, namespaces are used to divide and organize resources within a cluster, creating separate environments for different teams, projects, or applications. This helps in managing resources, access permissions, and ensuring that each group or application operates independently and securely.
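Most kubectl commands accept -n (or --namespace) to target a specific namespace, and you can change the default namespace for the current context as well. A quick sketch with a placeholder namespace name:
kubectl get pods -n <namespace_name>                                  # run a single command against a namespace
kubectl config set-context --current --namespace=<namespace_name>    # make it the default for this context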
    To create a new namespace with the specified name in your Kubernetes cluster:
    kubectl create namespace <namespace_name> To list all namespaces in your Kubernetes cluster, use the command:
    kubectl get namespaces You can get a detailed description of the specified namespace, including its status, resource quotas using the command:
    kubectl describe namespace <namespace_name> To delete the specified namespace along with all the resources contained within it:
    kubectl delete namespace <namespace_name> The command
    kubectl edit namespace <namespace_name> opens the default editor on your machine with the configuration of the specified namespace, allowing you to make changes directly.
    To display resource usage (CPU and memory) for all pods within a specific namespace, you can use the following command:
    kubectl top pods --namespace=<namespace_name> Nodes
    In Kubernetes, nodes are the fundamental building blocks of the cluster, serving as the physical or virtual machines that run your applications and services.
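The commands below cover individual node operations. As a hypothetical example of how cordon, drain, and uncordon (each described further down) are typically combined for node maintenance:
kubectl cordon <node_name>                                              # stop new pods from being scheduled
kubectl drain <node_name> --ignore-daemonsets --delete-emptydir-data   # evict the existing pods
# ...perform the maintenance, then allow scheduling again
kubectl uncordon <node_name>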
To update the taints on one or more nodes (a taint is a key-value pair combined with an effect such as NoSchedule):
kubectl taint nodes <node_name> <key>=<value>:<effect> List all nodes in your Kubernetes cluster:
    kubectl get node Remove a specific node from your Kubernetes cluster,
    kubectl delete node <node_name> Display resource usage (CPU and memory) for all nodes in your Kubernetes cluster:
    kubectl top nodes List all pods running on a node with a specific name:
    kubectl get pods -o wide | grep <node_name> Add or update annotations on a specific node:
kubectl annotate node <node_name> <key>=<value> 📋 Annotations are key-value pairs that can be used to store arbitrary non-identifying metadata. Mark a node as unschedulable (no new pods will be scheduled on the specified node).
kubectl cordon <node_name> Mark a previously cordoned (unschedulable) node as schedulable again:
kubectl uncordon <node_name> Safely evict all pods from the specified node in preparation for maintenance or decommissioning:
kubectl drain <node_name> Add or update labels on a specific node in your Kubernetes cluster:
    kubectl label node <node_name> <key>=<value> Pods
    A pod is the smallest and simplest unit in the Kubernetes object model that you can create or deploy. A pod represents a single instance of a running process in your cluster and can contain one or more containers. These containers share the same network namespace, storage volumes, and lifecycle, allowing them to communicate with each other easily and share resources.
    Pods are designed to host tightly coupled application components and provide a higher level of abstraction for deploying, scaling, and managing applications in a Kubernetes environment. Each pod is scheduled on a node, where the containers within it are run and managed together as a single, cohesive unit.
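Several commands below reference a pod.yaml file. As a minimal, hypothetical example of such a manifest applied via a heredoc (the pod name, labels, and image are placeholders):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello
spec:
  containers:
  - name: hello
    image: nginx:1.25
    ports:
    - containerPort: 80
EOF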
    List all pods in your Kubernetes cluster:
    kubectl get pods List all pods in your Kubernetes cluster and sort them by the restart count of the first container in each pod:
    kubectl get pods --sort-by='.status.containerStatuses[0].restartCount' List all pods in your Kubernetes cluster that are currently in the "Running" phase:
    kubectl get pods --field-selector=status.phase=Running Delete a specific pod from your Kubernetes cluster:
    kubectl delete pod <pod_name> Display detailed information about a specific pod in your Kubernetes cluster:
    kubectl describe pod <pod_name> Create a pod using the specifications provided in a YAML file:
    kubectl create -f pod.yaml OR kubectl apply -f pod.yaml To execute a command in a specific container within a pod in your Kubernetes cluster:
kubectl exec <pod_name> -c <container_name> -- <command> Start an interactive shell session in a container within a specified pod:
    # For Single Container Pods kubectl exec -it <pod_name> -- /bin/sh # For Multi-container pods, kubectl exec -it <pod_name> -c <container_name> -- /bin/sh Display resource (CPU and memory) usage statistics for all pods in your Kubernetes cluster:
    kubectl top pods Add or update annotations on a specific pod:
    kubectl annotate pod <pod_name> <key>=<value> To add or update the label of a pod:
    kubectl label pod <pod_name> new-label=<label name> List all pods in your Kubernetes cluster and display their labels:
    kubectl get pods --show-labels Forward one or more local ports to a pod in your Kubernetes cluster, allowing you to access the pod's services from your local machine:
    kubectl port-forward <pod_name> <port_number_to_listen_on>:<port_number_to_forward_to> Replication Controllers
    Replication Controller (RC) ensures that a specified number of pod replicas are running at any given time. If any pod fails or is deleted, the Replication Controller automatically creates a replacement. This self-healing mechanism enables high availability and scalability of applications.
    To list all Replication Controllers in your Kubernetes cluster
    kubectl get rc List all Replication Controllers within a specific namespace:
kubectl get rc --namespace="<namespace_name>" ReplicaSets
    ReplicaSet is a higher-level concept that ensures a specified number of pod replicas are running at any given time. It functions similarly to a Replication Controller but offers more powerful and flexible capabilities.
    List all ReplicaSets in your Kubernetes cluster.
    kubectl get replicasets To display detailed information about a specific ReplicaSet:
    kubectl describe replicasets <replicaset_name> Scale the number of replicas for a specific resource, such as a Deployment, ReplicaSet, or ReplicationController, in your Kubernetes cluster.
    kubectl scale --replicas=<number_of_replicas> <resource_type>/<resource_name> Secrets
    Secrets are used to store and manage sensitive information such as passwords, tokens, and keys.
    Unlike regular configuration files, Secrets help ensure that confidential data is securely handled and kept separate from application code.
    Secrets can be created, managed, and accessed within the Kubernetes environment, providing a way to distribute and use sensitive data without exposing it in plain text.
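As a concrete, hypothetical example, a generic Secret can be created directly from literal values (the Secret name and credentials are placeholders):
kubectl create secret generic db-credentials \
  --from-literal=username=admin \
  --from-literal=password='S3cr3tValue'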
    To create a Secret,
    kubectl create secret (docker-registry | generic | tls) List all Secrets in your Kubernetes cluster:
    kubectl get secrets Display detailed information about a specific Secret:
    kubectl describe secret <secret_name> Delete a specific Secret from your Kubernetes cluster:
    kubectl delete secret <secret_name> Services
    Services act as stable network endpoints for a group of pods, allowing seamless communication within the cluster. They provide a consistent way to access pods, even as they are dynamically created, deleted, or moved.
    By using a Service, you ensure that your applications can reliably find and interact with each other, regardless of the underlying pod changes.
    Services can also distribute traffic across multiple pods, providing load balancing and improving the resilience of your applications.
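As a hedged, concrete example of the expose command described below (the deployment name, ports, and type are placeholders):
kubectl expose deployment web --port=80 --target-port=8080 --type=ClusterIP
kubectl get service web   # confirm the new Service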
    To list all Services in your Kubernetes cluster:
    kubectl get services To display detailed information about a specific Service:
    kubectl describe service <service_name> Create a Service that exposes a deployment:
    kubectl expose deployment <deployment_name> --port=<port> --target-port=<target_port> --type=<type> Edit the configuration of a specific Service:
    kubectl edit service <service_name> Service Accounts
    Service Accounts provide an identity for processes running within your cluster, enabling them to interact with the Kubernetes API and other resources. By assigning specific permissions and roles to Service Accounts, you can control access and limit the actions that pods and applications can perform, enhancing the security and management of your cluster.
    Service Accounts are essential for managing authentication and authorization, ensuring that each component operates with the appropriate level of access and adheres to the principle of least privilege.
    To list all Service Accounts in your Kubernetes cluster:
    kubectl get serviceaccounts Display detailed information about a specific Service Account:
    kubectl describe serviceaccount <serviceaccount_name> Next is replacing a service account. Before replacing, you need to export the existing Service Account definition to a YAML file.
kubectl get serviceaccount <serviceaccount_name> -o yaml > serviceaccount.yaml Once you have made changes to the YAML file, replace the existing Service Account with the modified one:
    kubectl replace -f serviceaccount.yaml Delete a specific Service Account from your Kubernetes cluster:
    kubectl delete serviceaccount <service_account_name> StatefulSet
    StatefulSet is a specialized workload controller designed for managing stateful applications. Unlike Deployments, which are suitable for stateless applications, StatefulSets provide guarantees about the ordering and uniqueness of pods.
    Each pod in a StatefulSet is assigned a unique, stable identity and is created in a specific order. This ensures consistency and reliability for applications that require persistent storage, such as databases or distributed systems.
    StatefulSets also facilitate the management of pod scaling, updates, and rollbacks while preserving the application's state and data.
    To list all StatefulSets in your Kubernetes cluster:
    kubectl get statefulsets To delete a specific StatefulSet from your Kubernetes cluster without deleting the associated pods:
kubectl delete statefulset <stateful_set_name> --cascade=orphan 💬 Hope you like this quick overview of the kubectl commands. Please let me know if you have any questions or suggestions.
  20. by: Sourav Rudra
    Tue, 04 Mar 2025 11:04:00 GMT

May 5, 2025, is the day Skype will cease to exist as Microsoft retires it, pushing people to switch to its Teams offering. The death of Skype was a slow one, but one that has been coming for quite some time now.
While it may be appealing to switch to Teams, it is still Microsoft, the not-so-privacy-friendly company ☠️
    So, why not give open source Skype alternatives a chance instead?
    You can stop sending your data to companies, and privately communicate with your friends and colleagues.
    Join me as I take you through some solid choices for secure and reliable video communication.
📋 The list is in no particular order of ranking.
1. Jami
    Jami is a popular decentralised secure communication platform that offers messaging, voice calls, and video calls.
    Unlike Skype, Jami operates on a peer-to-peer architecture, which results in enhanced privacy and reliability for the people who use it.
    There are no restrictions on the number of messages, file size to upload, and calls. You can do it all, as long as you want, without sharing any personal information with the app.
    You can read up on our detailed coverage on Jami to learn more if you're intrigued by it.
    Key Highlights
    Supports group chats End-to-end encryption Peer-to-peer networking Self-hostable Can work as an SIP client Get Jami
    You can get Jami for platforms like Linux, Android, Android TV, Windows, iOS, and macOS from the official website.
Jami
Suggested Read 📖: Jami: A Versatile Open-Source Distributed Communication App (It's FOSS)
2. Linphone
    As an open source VoIP (Voice over IP) application, Linphone enables high-quality audio and video calls using the SIP protocol. It is an ideal choice for both enterprises and organisations who prefer a secure, reliable way of communicating.
You can choose between white label, open source, and proprietary licenses, giving you flexibility in what you want.
    Key Highlights
    Supports VoIP calls Multi-platform support Leverages SIP protocol White label, open source, and proprietary license options Get Linphone
    The official website hosts the packages for Linux, Android, Windows, iOS, and macOS. I had to sign up for a Linphone account to test things out, but there is also the option to connect to a third-party SIP account.
Linphone
3. Jitsi Meet Online
    If you are someone who likes a communication solution that works in your web browser and mobile phones, then Jitsi Meet Online can be a good choice for you. It is a free, open source videoconferencing service from Jitsi that facilitates secure, encrypted meetings, with many useful features like chat, screen sharing, and recording.
    You can also choose to host your instance, and customize it to your heart's content. It has been one of the best open source video conferencing solutions out there.
    Key Highlights
    Unlimited participants Minimal account setup Supports screen sharing Self-hostable Get Jitsi Meet Online
    For communicating with others, you can either self-host Jitsi or directly start a meet on Jitsi Meet Online. Keep in mind that you have to create an account if you want a moderated video meet, and have the meeting URL booked in advance.
Jitsi Meet Online
4. MiroTalk
Similar to Jitsi Meet, we have MiroTalk, a browser-based videoconferencing tool that uses WebRTC for real-time communication. Anyone who uses MiroTalk can expect to take advantage of an interactive whiteboard, seamless file sharing, and low-latency audio/video calls.
    Key Highlights
    Can be self-hosted No installation required Low-latency connections Get MiroTalk
    Being a no-nonsense service, using MiroTalk is as simple as visiting the official website and joining a room. You just have to enter a display name and click on "Join meeting".
    There's ChatGPT integration as well, though I would want to stay away from that.
MiroTalk
5. Element
Element web in action.
Element is one of the best Matrix clients, enabling decentralized and secure communication over text, voice, and video. It can be a great Skype alternative if you find the other options on this list to be too complicated.
    Users can either self-host or sign up with Element’s free hosted service for their account. These approaches ensure flexibility and control over one's data.
    Element offers end-to-end encryption, allowing users to have private conversations without a third party snooping in. It also supports file sharing, group chats, and integration with other services.
    Key Highlights
    Easy file transfers Self-hostable Powered by Matrix protocol Supports voice and video calls Get Element
    You can get Element for both desktop and mobile, with apps being made available for Linux, Android, Windows, iOS, and macOS. There is also Element web if you don't prefer installing apps.
Element
9 Best Matrix Clients for Decentralized Messaging (It's FOSS)
6. Wire
    While Wire has a strong focus on team communication, it can still be used for personal use to connect with other people over text, voice, or video. The platform provides end-to-end encryption by default, while offering a user-friendly experience for everyone who tries it.
    If you are an organisation looking for a secure alternative to Skype, this can be a great option to explore.
    Key Highlights
    End-to-end encryption Supports multi-device sync Real-time team collaboration Get Wire
    Before you proceed, please ensure that you have a Personal account configured if you are an individual user. I say that because, during my testing, I mistakenly went for a team account during the onboarding. Applications can be downloaded for Linux, Android, Windows, iOS, and macOS from the official website. There is also a web version for those who prefer webapps.
Wire
7. Nextcloud Talk
    Nextcloud is a versatile open-source remote working tool, and Nextcloud Talk provides excellent video conferencing and communication capabilities. If you have a Nextcloud server, the Talk portal offers features like screen sharing and messaging, fulfilling your video conferencing needs.
However, the only catch here is that setting up a Nextcloud instance requires some technical expertise.
    Key Highlights
    Screen sharing Messaging Integrated into the Nextcloud ecosystem Self-hostable Get Nextcloud Talk
    To utilize Nextcloud Talk, you need a Nextcloud server. Once your server is live, Nextcloud Talk can be used for video conferencing and communication.
Nextcloud Talk
Wrapping Up
    It is easy to replace Microsoft's Skype in 2025, with all the open source solutions out there.
Furthermore, the open source options offer way better control, customization, and privacy-friendly features to give you a better user experience.
    💬 If you feel that I missed any good alternatives, please let me know in the comments. What will you be choosing to replace Microsoft's Skype? Let me know!


  21. by: Chris Coyier
    Mon, 03 Mar 2025 16:30:37 +0000

I’ve been a bit sucked into the game Balatro lately. Seriously. Tell me your strategies. I enjoy playing it equally as much lately as unwinding watching streamers play it on YouTube. Balatro has a handful of accessibility features. Stuff like slowing down or turning off animations and the like. I’m particularly interested in one of the checkboxes below though:
“High Contrast Cards” is one of the options. It’s a nice option to have, but I find it particularly notable because of its popularity. You know those streamers I mentioned? They all seem to have this option turned on. Interesting how an “accessibility feature” actually seems to make the game better for everybody. As in, maybe the default should be reversed or just not there at all, with the high contrast version being just how it is.
    It reminds me about how half of Americans, particularly the younger generation, prefer having closed captioning on TV some or all of the time. An accessibility feature that they just prefer.
    Interestingly, the high contrast mode in Balatro mostly focuses on changing colors.
If you don’t suffer from any sort of colorblindness (like me? I think?) you’ll notice the clubs above are blue, which differentiates them from the spades which remain black. The hearts and diamonds are slightly differentiated, with the diamonds being a bit more orange than red.
Is that enough? It’s enough that many players prefer it, likely preventing accidentally playing a flush hand with the wrong suits, for example. But I can’t vouch for whether it works for people with actual low vision or a type of color blindness, which is what I’d assume would be the main point of the feature. Andy Baio wrote a memorable post about colorblindness a few years ago called Chasing rainbows. There are some great examples in that post that highlight the particular type of colorblindness Andy has. Sometimes super different colors look a lot closer together than you’d expect, but still fairly distinct. And sometimes two colors that are a bit different actually appear identical to Andy.
    So maybe the Balatro colors are enough (lemme know!) or maybe they are not. I assume that’s why a lot of “high contrast” variations do more than color, they incorporate different patterns and whatnot. Which, fair enough, the playing cards of Balatro already do.
    Let’s do a few more fun CSS and color related links to round out the week:
Adam Argyle: A conic gradient diamond and okLCH — I’m always a little surprised at the trickery that conic gradients unlock. Whenever I think of them I’m like uhmmmmm color pickers and spinners I guess? Michelle Barker: Messing About with CSS Gradients — Layered gradients unlocking some interesting effects and yet more trickery. Michelle Barker: Creating color palettes with the CSS color-mix() function — Sure, color-mix() is nice for a one-off where you’re trying to ensure contrast or build the perfect combo from an unknown other color, but it can also be the foundational tool for a system of colors. Keith Grant: Theme Machine — A nice take on going from choosing nice individual colors to crafting palettes, seeing them in action, and getting custom property output for CSS.
  22. by: Tatiana P
    Mon, 03 Mar 2025 15:22:06 +0000

    We can no longer say that the jobs will stay the same 10 years from now, so we need to constantly re-evaluate our options based on reality and what is available out there.
    About me
I am Krittika Varmann. I am a Senior Cloud and AI Developer for F-Secure. I am an engineer at heart, drawn to solving problems simply for the joy of the process; sometimes, the journey matters more to me than the destination.
    I have always been eager to see my work have a real-world impact, and I strongly believe in maintaining a balance between work, health, hobbies, and relationships. These values have significantly shaped my career path and life choices. Instead of pursuing theoretical research or academia, I gravitated toward industrial innovations, applying AI to solve real-life challenges.
    Beyond engineering and AI, I am endlessly fascinated by human behaviour and technology. Whether it is cognitive biases, effective communications, or the art of persuasion, I love exploring how psychology intersects everyday life and business. You will often find me immersed in books on these topics. In my free time, I enjoy playing board games with friends, going to the sauna, winter swimming, and baking.
    On my path to F-Secure
    I have seven years of experience in data, cloud & AI, with a career shaped by curiosity, adaptability, and a motivation to stay ahead of industry trends.
My journey began as a researcher and project coordinator at the University of Eastern Finland in Joensuu. From there, I transitioned to Kone, where I worked as a data scientist before shifting into data engineering. I made this transition for two key reasons. First, I wanted to develop hands-on skills by working across the entire data pipeline, from start to finish, so I could build and manage solutions independently. Second, I saw data engineering as a way to future-proof my career, keeping myself highly employable and aligned with in-demand roles.
    After four years at Kone, I moved to Sanoma, where I worked as a cloud engineer for about a year and a half. Then, four months ago, I transitioned to F-Secure, a move driven by careful deduction, prioritization, and a clear vision of what I want from my career and life.
    The main reason for me joining F-Secure was the cybersecurity domain of the company. As technology evolves and data becomes even more valuable, security threats are increasing. Cybersecurity, in my opinion, will only grow in importance in the coming years.
    F-Secure is an incredible place to work, and what I appreciate most is that the product itself is software. The company’s core focus is on building high-quality, industry-standard code, which aligns perfectly with my values as an engineer. The role itself also allows me to bring together all the skills I have acquired through the shifts in my career: API development, cloud infrastructure, ML modelling, MLOps, testing & writing production-grade code.
    The people I work with have fantastic work ethics, so I have much to feel inspired by my colleagues. And as I said, cybersecurity is such a critical domain in this world right now. I see this as an opportunity to make a real-world impact and contribute to a vital field in protecting information.
    Krittika Varmann, Senior Cloud and AI Developer, F-Secure
    The beginning of my studies
I have a bachelor’s in printing engineering, where I explored media, ink, paper, and printing technologies. During my studies, I completed two internships: one at a packaging company in China and another at a German startup researching electronic ink through the DAAD scholarship.
Later, I received an Erasmus Mundus scholarship for my master’s in colour science: a dual degree in optics and computer science across France, Spain, and Finland. My thesis focused on applying AI to smart lighting solutions. Upon arriving in Finland, I immediately felt at home; the direct culture and strong work-life balance resonated with me, leading me to stay.
    Strategic Approach to Learning
Strategic decisions have shaped my academic and career choices. Coming from a highly competitive environment in India, I sought alternative fields where I could stand out. Instead of pursuing computer science or electronics engineering, I chose printing engineering, where competition was lower, allowing me to excel in this field and still have time to acquire other skills like studying German and Mandarin. I proactively reached out to professors worldwide, securing unique job opportunities. These experiences had a snowball effect: my internship in China and my German language skills strengthened my application for the DAAD scholarship, which in turn positively affected my application for the Erasmus Mundus scholarship to do my master’s degree.
    Maximizing Future Options
    My advice for those starting is to maintain a curious mindset and keep pushing to create and expand future opportunities at every step. When you begin, you have fewer options, but you can expand opportunities for the future so that you have options to choose from.
    Sometimes, I’ve seen people start with something, get stuck with it, and spend many years doing it. I’ve seen people staying in the same work for 20 years, and that’s not necessarily bad for somebody who wants that. But for me, that narrows down my options.
    I believe in maximizing future opportunities rather than getting confined to a single domain. This philosophy aligns with Morgan Housel’s powerful definition of wealth from ‘The Psychology of Money’: ‘Wealth is having the freedom to do what you want, when you want, with whom you want, for as long as you want.’ I strive to maximize this kind of freedom in my career. Many people stay in one job for decades, which works for some, but I prefer versatility. If I ever wake up feeling unfulfilled, I want the flexibility to switch paths. Without the foundational work I put in early on, making transitions would have felt overwhelming. However, transitions have become much smoother and more natural for me because I developed the habit of adapting and exploring different domains from the start.
    We can no longer say that the jobs will stay the same 10 years from now, so we need to constantly re-evaluate our options based on reality and what is actually available out there. We must be ready to jump to things and do things that don’t make us feel comfortable. If we want to thrive in this fast-changing environment, we must keep adapting and pushing ourselves to take on new challenges.
    In a world where technology evolves rapidly and jobs become obsolete, adaptability is crucial. You can either stay in your comfort zone and risk stagnation or embrace change and continuously challenge yourself. Both come with struggles, and choosing the struggle that aligns with your long-term goals is key.
    Networking tips from Krittika
    Value Networking in Tech Brings
    The tech industry is quite tightly knit, especially in Finland, where the community is small. After a while, you’ll find everybody knows everybody. It’s not uncommon to hear a Finn say, “Oh, I know him from back in the Nokia days!”
    This, along with the concept of Nordic Trust, meaning there is a great deal of trust in recommendations, business operations, and general dealings in Nordic communities, can significantly influence your career prospects.
    As a personal example, after finishing my third semester studying in Finland, I had the opportunity to pursue my master’s thesis anywhere in the world, but by then, I had already decided to stay in Finland and opted for an industrial Master Thesis rather than an academic one. I did it with a Finnish innovation company, and later, when I applied for a position at Kone, my former manager saw my supervisor’s name and said, “Oh, I know him from Nokia times. Mind if I give him a call?” My supervisor gave me a stellar recommendation, and I was hired.
    Preparing for Networking
    I usually research the person and find something in their bio/LinkedIn/resume/publications/websites that could serve as good common ground. I use that to start a conversation, e.g., “Oh, I noticed you worked on satellites at XYZ company. That seemed interesting to me. I also worked on satellites during my astronomy lessons. I was wondering how you solved ABC using XYZ technology?” Then I find a way to segue into something else.
    Finding Offline Networking Opportunities
    For tech-related networking, I usually attend AWS Meetup groups and other similar events like Confluence-led meetups or Terraform meetups.
    I’ve also gained many networking benefits by attending the AWS Stockholm Summit, where I met AWS experts in the Nordics while improving my knowledge base on cloud technologies.
    There are also opportunities to participate in hackathons like Junction, hosted by Aalto University along with other companies. It’s a great way to get noticed by companies.
    A huge source of networking for me has been being part of the Finland Young Professional group. Unfortunately, the only way to join is by working for a company that is part of the network, but the good news is that many tech companies in Finland are already members. FYP frequently hosts events like the Hiimos ski break and the Tallinn trip, where group activities turn professional contacts into close friends.
    I would suggest finding events where you can showcase your projects and talk about what you have learned and how they can benefit real-life problem-solving.
    Podcasts, blogs, and online content shared on social media are also great directions.
    Maintaining & Nurturing Professional Networks
    The most important thing when you meet someone is to add them on LinkedIn. This opens many channels as your network grows, and someone who knows someone can often help you out when in need.
    I’m active in the LinkedIn community, and when a contact posts updates, I often react and comment with encouraging words. This helps keep the connection fresh and reinforces the acquaintance.
    I’ve changed companies a couple of times but made sure to keep in touch with former colleagues. Every few months, I suggest going out for dinner, which has kept those bonds alive.
    Being genuinely curious about people, adding them on LinkedIn, following up after networking events, and being proactive in organizing or attending tech events, even if you don’t know anyone there, is a great way to get out of your comfort zone.
    Networking Mistakes
    A few mistakes one could make in networking include: being overly self-promotional, being too aggressive (pushing for referrals or jobs), not expanding beyond your comfort zones, and the biggest one: not preparing for networking events. Showing up without knowing who will be there and having some conversation starters ready can lead to missed opportunities.
    Best Advice That Helped Advance My Career
    The best advice would be: In life, you get exactly what you ask for. If you don’t know what you want, how to formulate it, or how to ask for it, it’s highly unlikely you’ll ever get it. I’ve never shied away from asking for more, and even one proactive action can lead to ripple effects for the future.
    For example, I was a highly inexperienced bachelor’s student in my home country and somehow managed to get into a printing press as a summer intern. There was a visiting technician from the UK who was fixing a packaging machine, and I stood next to him every day, observing and taking notes on what he did. He noticed my curiosity, and when I asked him if he knew of any internship opportunities abroad, he immediately gave me a contact in China. At just 20 years old, I got to do an internship abroad in a production facility, which helped my Master’s applications in Europe. I often think back to that day fondly. If I had never asked for that opportunity, I might not have ended up in Finland.
    Gaining More Confidence
    I recommend reading How to Win Friends and Influence People by Dale Carnegie. The book talks about several tips for improving communication, such as showing curiosity about other people’s lives, remembering people’s names, and listening actively.
Another book worth reading, especially for introverts, is The Subtle Art of Not Giving a F*ck by Mark Manson. It talks about many mechanisms for navigating life better. A relevant tip is The Spotlight Effect, which, from psychology, means that when we are in public, we tend to think everyone is looking at us. The paradox is that, since everybody thinks this way, they are more aware of themselves than of you. It’s more likely you remember making a fool of yourself than they do. You can use this information to your advantage because it frees you from the fear of making mistakes and helps you stop taking yourself so seriously.
    Finally, the last thing I want to say is that, as a proponent of gender-balanced opportunities, research shows that women have been raised not to fail, whereas men have been raised with more freedom to make mistakes. I use this information to inspire myself because a man wouldn’t hesitate to approach a stranger and start a conversation, possibly making mistakes in his career. But women tend to overthink and underdo, which prevents them from even getting those opportunities in the first place. In my opinion, it’s far better to get an opportunity and make a mistake than to try to be perfect and never take any chances at all.
    Amplifying Your Voice as a Woman in Tech
    Two pieces of advice that stuck with me from Pia Nilsson, Director of Engineering at Spotify, during the Stockholm Summit:
    1. If you, as a woman, are doing a lot of glue jobs like organizing social events that contribute positively to the workplace atmosphere, make sure your upper management is aware of that and your role in it. Also, try to set up a rotation cycle so you’re not stuck doing it all by yourself.
    2. If, after 4 years of being an engineer, you’re asked to become lead, imagine a few years down the line when you try to lead a team of engineers with 10-14 years of experience. You’ll have no clout to lead them, and you’ll become a default “people leader.”
    Also, I believe in the mantra Let action speak louder than words. Go get that certification, go make mistakes, go learn from them, and use facts and reason when discussing problems with colleagues. When you make good points, it’s hard to negate them.
    The Constantly Evolving Tech Industry
    The best thing (and sometimes the worst) about the tech industry is that it’s constantly evolving. This means a person can have 10 years’ experience coding in Java and still not get a job in software engineering because they don’t know Python or have no experience with other coding languages.
    Most tech people are self-taught, and being in tech is largely about being willing to fail, pick yourself up, and start again the next day. Not to undermine seniority, but if there’s any industry where fast learning skills often overtake experience, it’s tech.
    There are many stories of young tech startup founders becoming big shots overnight because they dared to dream and take actions, even without experience. Everybody starts somewhere, and it’s never too late to start.
    In general, my motto in life is not to compare myself to others but to simply remember we are all on our own journeys and paths.
    It’s also about an abundance mindset: There are enough resources and opportunities out there for each of us. There’s no need to compete. All you need is one spot, one opportunity, one person to believe in you. If someone has more experience, it means they’ve put in their time and effort to get there, and if you want to be like them, you must be willing to put in the work and fail just as they have. Let that inspire you rather than demotivate you.
    Also, remember you bring unique skills and talents to the table, so there’s no need to be anyone else. Make it your own.
    The post Role model blog: Krittika Varmann, F-Secure first appeared on Women in Tech Finland.
  23. Functions in CSS?!

    by: Juan Diego Rodríguez
    Mon, 03 Mar 2025 13:34:22 +0000

    A much-needed disclaimer: You (kinda) can use functions now! I know, it isn’t the most pleasant feeling to finish reading about a new feature just for the author to say “And we’ll hopefully see it in a couple of years”. Luckily, right now you can use an (incomplete) version of CSS functions in Chrome Canary behind an experimental flag, although who knows when we’ll get to use them in a production environment.
    Arguments, defaults, and returns!
    I was drinking coffee when I read the news on Chrome prototyping functions in CSS and… I didn’t spit it or anything. I was excited, but thought “functions” in CSS would be just like mixins in Sass — you know, patterns for establishing reusable patterns. That’s cool but is really more or less syntactic sugar for writing less CSS.
    But I looked at the example snippet a little more closely and that’s when the coffee nearly came shooting out my mouth.
From Bramus on Bluesky
    Arguments?! Return values?! That’s worth spitting my coffee out for! I had to learn more about them, and luckily, the spec is clearly written, which you can find right here. What’s crazier, you can use functions right now in Chrome Canary! So, after reading and playing around, here are my key insights on what you need to know about CSS Functions.
    What exactly is a function in CSS?
    I like this definition from the spec:
    They are used in the same places you would use a custom property, but functions return different things depending on the argument we pass. The syntax for the most basic function is the @function at-rule, followed by the name of the function as a <dashed-ident> + ():

    @function --dashed-border() { /* ... */ }

    A function without arguments is like a custom property, so meh… To make them functional, we can pass arguments inside the parentheses, also as <dashed-ident>s:

    @function --dashed-border(--color) { /* ... */ }

    We can use the result descriptor to return something based on our argument:

    @function --dashed-border(--color) {
      result: 2px dashed var(--color);
    }

    div {
      border: --dashed-border(blue); /* 2px dashed blue */
    }

    Functions can have type-checking
    Functions can have type-checking for arguments and return values, which will be useful whenever we want to interpolate a value just like we do with variables created with @property, and once we have inline conditionals, to make different calculations depending on the argument type.
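    As a point of reference, here is the existing @property way of giving a custom property a type so it can be interpolated. This is a standard example rather than something from the article, and the --brand-color name is purely illustrative; argument types on functions play a similar role:

    @property --brand-color {
      syntax: "<color>";            /* the registered type, so the value can be animated/interpolated */
      inherits: false;
      initial-value: rebeccapurple; /* required when the syntax isn't the universal "*" */
    }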
    To add argument types, we pass a syntax component. That is, the type enclosed in angle brackets, where color is <color> and length is <length>, just to name a couple. There are also syntax multipliers like plus (+) to accept a space-separated list of that type.

    @function --custom-spacing(--a <length>) { /* ... */ }      /* e.g. 10px */
    @function --custom-background(--b <color>) { /* ... */ }    /* e.g. hsl(50 30% 50%) */
    @function --custom-margin(--c <length>+) { /* ... */ }      /* e.g. 10px 2rem 20px */

    If, instead, we want to define the type of the return value, we can write the returns keyword followed by the syntax component:

    @function --progression(--current, --total) returns <percentage> {
      result: calc(var(--current) / var(--total) * 100%);
    }

    Just a little exception for types: if we want to accept more than one type using the syntax combinator (|), we’ll have to enclose the types in a type() wrapper function:

    @function --wideness(--d type(<number> | <percentage>)) { /* ... */ }

    Functions can have list arguments
    While it doesn’t currently seem to work in Canary, we’ll be able in the future to take lists as arguments by enclosing them inside curly braces. So, this example from the spec passes a list of values like {1px, 7px, 2px} and gets its maximum to perform a sum.
    @function --max-plus-x(--list, --x) {
      result: calc(max(var(--list)) + var(--x));
    }

    div {
      width: --max-plus-x({ 1px, 7px, 2px }, 3px); /* 10px */
    }

    I wonder, then: will it be possible to select a specific element from a list? And to define how long the list should be? Say we want to accept only lists that contain four elements, then select each element individually to perform some calculation and return it. Many questions here!
    Early returns aren’t possible
    That’s correct: early returns aren’t possible. This isn’t a spec feature that simply hasn’t been prototyped yet; it’s something that won’t be allowed at all. So, if we have two returns, one placed early behind a @media or @supports at-rule and one outside at the end, the last result will always be returned:

    @function --suitable-font-size() {
      @media (width > 1000px) {
        result: 20px;
      }
      result: 16px; /* This always returns 16px */
    }

    We have to change the order of the returns, leaving the conditional result for last. This doesn’t make a lot of sense in other programming languages, where the function ends after returning something, but there is a reason the C in CSS stands for Cascade: this order allows the conditional result to override the earlier result, which is very CSS-y in nature:

    @function --suitable-font-size() {
      result: 16px;

      @media (width > 1000px) {
        result: 20px;
      }
    }

    Imagining the possibilities
    Here I wanted everyone to chip in and write about the new things we could make using functions. So the team here at CSS-Tricks put our heads together and thought about some use cases for functions. Some are little helper functions we’ll sprinkle a lot throughout our CSS, while others open new possibilities. Remember, all of these examples should be viewed in Chrome Canary until support expands to other browsers.
    Here’s a basic helper function from Geoff that sets fluid type:
    @function --fluid-type(--font-min, --font-max) {
      result: clamp(var(--font-min), 4vw + 1rem, var(--font-max));
    }

    h2 {
      font-size: --fluid-type(24px, 36px);
    }

    This one is from Ryan, who is setting the width with an intrinsic container function; notice the default arguments.

    @function --intrinsic-container(--inline-margin: 1rem, --max-width: 60ch) {
      result: min(100% - var(--inline-margin), var(--max-width));
    }

    This one is from moi. When I made that demo using tan(atan2()) to create viewport transitions, I used a helper property called --wideness to get the screen width as a decimal between 0 and 1. At that moment, I wished for a function form of --wideness: something that, as I described it back then, tells you how far along the screen is between two widths, as a number from 0 to 1.
    I thought I would never see it, but now I can make it myself! Using that wideness function, I can move an element through its offset-path as the screen goes from 400px to 800px.
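    The definition of --wideness() itself isn’t shown above, so here is a rough sketch of my own (not necessarily how the original demo writes it), assuming the function takes a lower and upper bound as unitless pixel numbers and returns a number from 0 to 1. It leans on the tan(atan2()) trick mentioned earlier, since dividing two lengths directly inside calc() isn’t allowed:

    @function --wideness(--lower, --upper) returns <number> {
      /* atan2() accepts two lengths; tan() of the resulting angle gives their unitless ratio */
      /* clamp() keeps the result between 0 (viewport at --lower px) and 1 (viewport at --upper px) */
      result: clamp(
        0,
        tan(atan2(100vw - var(--lower) * 1px, (var(--upper) - var(--lower)) * 1px)),
        1
      );
    }

    With something like that in place, the demo below maps the returned 0-to-1 progress onto offset-distance: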
    .marker {
      offset-path: path("M 5 5 m -4, 0 a 4,4 0 1,0 8,0 a 4,4 0 1,0 -8,0"); /* Circular Orbit */
      offset-distance: calc(--wideness(400, 800) * 100%); /* moves the element when the screen goes from 400px to 800px */
    }

    What’s missing?
    According to Chrome’s issue on CSS Functions, we are in a super early stage since we cannot:
    …use local variables (although I tried them and they seem to work),
    …use recursive functions (they crash!),
    …use list arguments,
    …update a function and let the appropriate styles change,
    …use @function in cascade layers, or in the CSS Object Model (CSSOM),
    …use “the Iverson bracket functions … so any @media queries or similar will need to be made using helper custom properties (on :root or similar).”
    After reading what on earth an Iverson bracket is, I understood that we currently can’t have a return value behind a @media or @supports rule. For example, this snippet from the spec shouldn’t work:
    @function --suitable-font-size() {
      result: 16px;

      @media (width > 1000px) {
        result: 20px;
      }
    }

    Although, upon testing, it seems like it’s supported now. Still, we can use a provisional custom property and return it at the end if it isn’t working for you:

    @function --suitable-font-size() {
      --size: 16px;

      @media (width > 600px) {
        --size: 20px;
      }

      result: var(--size);
    }

    What about mixins? Soon, they’ll be here, according to the spec.
    In conclusion…
    I say it with confidence: functions will bring an enormous change to CSS, not in the sense that we’ll write it any differently — we won’t use functions to center a <div>, but they will simplify hack-ish CSS and open a lot of new possibilities. There’ll be a time when our cyborg children ask us from their education pods, “Is it true you guys didn’t have functions in CSS?” And we’ll answer “No, Zeta-5 ∀umina™, we didn’t” while shedding a tear. And that will blow their ZetaPentium© Gen 31 Brain chips. That is if CSS lasts long enough, but in the meantime, I am happy to change my site’s font with a function.
    Functions in CSS?! originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  24. 7 Best Github Alternatives in 2025

    by: Neeraj Mishra
    Sun, 02 Mar 2025 10:41:00 +0000

    Here you will find a list of some of the best GitHub alternatives that provide private and public repositories.
    As software developers, we very often need to host our code somewhere, and for this purpose most people default to a single service: GitHub. It cannot be denied that GitHub users have the choice of using either Git or Subversion for version control, and every GitHub user gets unlimited public repositories. One more fascinating feature of GitHub is that it allows you to create ‘organizations’, which is itself a normal account, except that at least one user account must be listed as the owner of the organization.
    Apart from providing desktop apps for Windows and macOS, GitHub also lets its users and organizations host one website and unlimited project pages for free. The typical domain for a hosted website looks something like username.github.io, and the address of a project page may look like username.github.io/project-page.
    Moving ahead, we have compiled a list of other services that can be used in place of GitHub without any harm. So let’s have a look at the list.
    7 Best Github Alternatives
    1. Bitbucket

    After GitHub, Bitbucket comes next in terms of usage and global popularity. Bitbucket provides a free account for users and organizations alike, limited to five users, along with access to unlimited private and public repos. One noteworthy feature is that it allows users to push their files using any Git client or the Git command line.
    Atlassian, the developer of Bitbucket, provides access to version control through its web interface. A free Mac and Windows client is also available in the form of Bitbucket’s own Git and Mercurial client, SourceTree.
    The domain for your hosted website on Bitbucket will look something like accountname.bitbucket.org, and the domain for project pages will look like accountname.bitbucket.org/project. Bitbucket also allows its users to use their own domain name for their website.
    2. Beanstalk

    Beanstalk is another good GitHub alternative, but it is not free. You get a two-week trial, after which, if you wish to continue, you will have to pay a minimum of $15 for its cheapest Bronze package. The Bronze package allows a maximum of 10 repositories, 3 GB of storage, and up to 5 users.
    Beanstalk supports the widely demanded Git and Subversion systems for version control. It is developed by Wildbit and also allows code editing in the browser itself, so users need not switch to the command line every now and then.
    3. GitLab

    GitLab is popular among users for features like a dedicated project website and an integrated project wiki. GitLab also provides automated testing and code delivery, so a user can do more work in less time instead of waiting on manually run tests. Other notable features are pull requests, a code viewer, and merge conflict resolution.
    4. Kiln

    Developed by Fog Creek, Kiln, unlike GitHub, is not a free place to host your software or website. You can try its version control and code hosting for Git and Mercurial during a 30-day trial period; after that, users need to upgrade to the premium version (minimum $18 a month) in order to continue working with Kiln. Kiln also charges separately for its code review module.
    If you host your website with Kiln, your domain will look something like this:
    companyname.kilnhg.com
    5. SourceForge

    Judging by the abundance of projects hosted on SourceForge, it has clearly been around for a long time. Compared to GitHub, SourceForge (developed by Slashdot Media) has an entirely different project structure. Unlike other code-hosting sites, SourceForge allows you to host both static and dynamic pages. One limitation is that projects can only be created and hosted on the site under unique names.
    A typical domain for your hosted project will look like proj.sourceforge.net.
    The SourceForge servers support scripting languages such as Python, Perl, PHP, Tcl, Ruby, and Shell. Users are free to choose Git, Subversion, or Mercurial as the version control system.
    6. Cloud Source by Google

    Google’s Git version control came into existence and moved to the Google Cloud platform when Google Code was retired by Google itself. Although Google provides its own repositories to work with, you can also connect Cloud Source to other services such as GitHub and Bitbucket. Cloud Source stores users’ code and apps on Google’s own infrastructure, which makes it even more reliable. Users can search their code right in the browser and also get cloud diagnostics to track problems while the code keeps running in the background.
    Cloud Source offers the Stackdriver Debugger, which lets you debug in parallel with the application still running.
    7. GitKraken

    GitKraken has grown popular among developers thanks to the exclusive features it offers its users. The primary attraction of GitKraken is its beautiful interface, along with its focus on speed and ease of use for Git. GitKraken comes with an incredibly handy ‘undo’ button, which helps users quickly revert mistakes. GitKraken has a free version that supports up to 20 users, as well as a premium version with several other good features.
    We hope you enjoyed learning with us. If you have any doubts, queries, or suggestions, please let us know in the comment section below. Do share in the comments if you know any other good GitHub alternatives.
    The post 7 Best Github Alternatives in 2025 appeared first on The Crazy Programmer.
  25. Rediscovering Plan9 from Bell Labs

    by: Bill Dyer


    During a weekend of tidying up - you know, the kind of chore where you’re knee-deep in old boxes before you realize it - I was digging through dusty cables and old, outdated user manuals when I found something I had long forgotten: an old Plan9 distribution. Judging by the faded ink and slight warping of the disk sleeve, it had to be from around 1994 or 1995.
    I couldn’t help but wonder: why had I kept this? Back then, I was curious about Plan9. It was a forward-thinking OS that never quite reached its full potential. Holding it now, however, the disk felt more like a time capsule, a real reminder of computing’s advancements and adventurous spirit in the 1990s.
    What Made Plan9 So Intriguing Back Then?
    In the 1990s, Bell Labs carried an almost mythical reputation for me. I was a C programmer and Unix system administrator and the people at Bell Labs were the minds behind Unix and C, after all. When Plan9 was announced, it felt like the next big thing. Plan9 was an operating system that promised to rethink Unix, not just patch it up. The nerd in me couldn’t resist playing with it.
    A Peek Inside the Distro
    Booting up Plan9 wasn’t like loading any other OS. From the minimalist Rio interface to the “everything is a file” philosophy taken to its extreme, it was clear this was something different.
    Some standout features that left an impression:
    9P Protocol: I didn’t grasp its full potential back then, but the idea of treating every resource as part of a unified namespace was extraordinary.
    Custom Namespaces: The concept of every user having their own view of the system wasn’t just revolutionary; it was downright empowering.
    Simplicity and Elegance: Even as a die-hard Unix user, I admired Plan9's ability to strip away the cruft without losing functionality.
    Looking at Plan9 Today
    Curiosity got the better of me, and I decided to see if the disk still worked. Spoiler: it didn’t.
    But thanks to projects like 9front, Plan9 is far from dead. I was able to download an image and fire it up in a VM. The interface hasn’t aged well compared to modern GUIs, but its philosophy and design still feel ahead of their time.

    As a seasoned (read: older) developer, I’ve come to appreciate things I might have overlooked in the 1990s:
    Efficiency over bloat: In today’s world of resource-hungry systems, Plan9’s lightweight design is like a breath of fresh air.
    Academic appeal: Its clarity and modularity make Plan9 an outstanding teaching tool for operating system concepts.
    Timeless innovations: Ideas like distributed computing and namespace customization feel even more pertinent in this era of cloud computing.
    Why didn’t Plan9 take off?
    Plan9 was ahead of its time, which often spells doom for innovative tech. Its radical departure from Unix made it incompatible with existing software. And let’s face it - developers were (and still are) reluctant to ditch well-established ecosystems.
    Moreover, by the 1990s, Unix clones such as Linux were gaining traction. Open-source communities rallied around Linux, leaving Plan9 with a smaller, academic-focused user base. It just didn’t have the commercial or user backing.
    Plan9’s place in the retro-computing scene
    I admit it: I can get sappy and nostalgic over tech history. Plan9 is more than a relic; it’s a reminder of a time when operating systems dared to dream big. It never achieved the widespread adoption of Unix or Linux, but it still has a strong following among retro-computing enthusiasts.
    Here’s why it continues to matter:
    For Developers: It’s a masterclass in clean, efficient design.
    For Historians: It’s a snapshot of what computing could have been.
    For Hobbyists: It’s a fun, low-resource system to tinker with.
    Check out the 9front project. It’s a maintained fork that modernizes Plan9 while staying true to its roots. Plan9 can run on modern hardware. It is lightweight enough to run on old machines, but I suggest using a VM; it is the easiest route.
    Lessons from years past
    How a person uses Plan9 is up to them, naturally, but I don't think that Plan9 is practical for everyday use. Plan9, I believe, is better suited as an experimental or educational platform rather than a daily driver. However, that doesn't mean that it wasn't special.
    Finding that old Plan9 disk wasn’t just a trip down memory lane; it was a reminder of why I was so drawn to computing. Plan9’s ambition and elegance is still inspiring to me, even decades later.
    So, whether you’re a retro-computing nerd, like me, or just curious about alternative OS designs, give Plan9 a run. Who knows? You might find a little magic in its simplicity, just like I did.
