Blog Entries posted by Blogger

  1. How To Set up a Cron Job in Linux

Cron is a time-based job scheduler that lets you schedule tasks and run scripts periodically at a fixed time, date, or interval. These scheduled tasks are called cron jobs. With cron jobs, you can efficiently perform repetitive tasks like clearing the cache, synchronizing data, and running system backups and maintenance.
Because cron jobs automate commands, they can also significantly reduce the chance of human error. However, many Linux users run into issues while setting up a cron job, so this article provides examples of how to set up a cron job in Linux.
    How To Set up a Cron Job
Firstly, you must know about the crontab file to set up a cron job in Linux. You can access this file to view information about existing cron jobs and edit it to introduce new ones. Before opening the crontab file, use the command below to check whether your system has the cron utility:
    sudo apt list cron

If the output does not show that cron is installed, install it with:
    sudo apt-get install cron -y
Now, verify that the cron service is active with the following command:
    service cron status
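On systemd-based distributions, you can also check and enable the service with systemctl. The unit is typically named cron on Debian/Ubuntu and crond on RHEL-based systems, so adjust the name to your distribution:
sudo systemctl status cron        # check the service (use 'crond' on RHEL/Fedora)
sudo systemctl enable --now cron  # start the service and enable it at boot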

    Once you are done, edit the crontab to start a new cron job:
    crontab -e
The system will ask you to select a text editor. For example, we use the nano editor by entering ‘1’ as input. However, you can choose any editor, because what matters for a cron job is its format, which we’ll explain in the next steps.
    After choosing an editor, the crontab file will open in a new window with basic instructions displayed at the top.

    Finally, append the following crontab expression in the file:
    * * * * * /path/script
Here, the five asterisks (*) represent, from left to right, the minute, hour, day of the month, month, and day of the week. Together they define exactly when the cron job should run. Also, replace the terms path and script with the path containing the target script and the script’s name, respectively.
    Time Format to Schedule Cron Jobs
As the time format used in the above expression can be confusing, let’s go over each field briefly:
In the Minutes field, you can enter values in the range 0-59. For an input of 9, for example, the job will run at the 9th minute of every hour.
For Hours, you can input values ranging from 0 to 23. For instance, the value for 2 PM would be ’14.’
The Day of the Month can be anywhere between 1 and 31, where 1 and 31 indicate the first and last day of the month. For the value 17, the cron job will run on the 17th day of every month.
In place of Month, you can enter a value from 1 to 12, where 1 means January and 12 means December. The task will be executed only during the months you specify here.
The Day of the Week accepts values from 0 to 7, where 0 and 7 both represent Sunday, 1 is Monday, and so on. For the value 2, the job will run only on Tuesdays.
Note: The value ‘*’ means every acceptable value. For example, if ‘*’ is used in the minutes field, the task will run every minute of the specified hour.
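Putting it all together, a crontab entry follows this layout (shown as a comment block you could keep at the top of your own crontab; /path/script is a placeholder):
# ┌───────── minute (0-59)
# │ ┌───────── hour (0-23)
# │ │ ┌───────── day of the month (1-31)
# │ │ │ ┌───────── month (1-12)
# │ │ │ │ ┌───────── day of the week (0-7, where 0 and 7 both mean Sunday)
# │ │ │ │ │
# * * * * * /path/script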
    For example, below is the expression to schedule a cron job for 9:30 AM every Tuesday:
    30 9 * * 2 /path/script
    For example, to set up a cron job for 5 PM on weekends in April:
    0 17 * 4 0,6-7 /path/script
    As the above command demonstrates, you can use a comma and a dash to provide multiple values in a field. So, the upcoming section will explain the use of various operators in a crontab expression.
Operators for Cron Jobs
Regardless of your experience with Linux, you’ll often need to automate jobs to run twice a year, three times a month, and so on. In these cases, you can use operators to make a single cron job run at different times; a combined example follows the list below.
    Dash (-): You can specify a range of values using a dash. For instance, to set up a cron job from 12 AM to 12 PM, you can enter * 0-12 * * * /path/script.
Forward Slash (/): A slash divides a field’s acceptable values into steps. For example, to make a cron job run quarterly, you’d enter 0 0 1 */3 * /path/script, which runs at midnight on the first day of every third month.
    Comma (,): A comma separates two different values in a single input field. For example, the cron expression for a task to be executed on Mondays and Wednesdays is * * * * 1,3 /path/script.
Asterisk (*): As discussed above, the asterisk represents all values the input field accepts. An asterisk in the month field, for example, schedules the cron job for every month.
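As an illustration of how these operators combine (the script path is a placeholder), the following entry runs every 15 minutes between 09:00 and 17:45, Monday through Friday:
*/15 9-17 * * 1-5 /path/backup.sh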
    Commands to Manage a Cron Job
Managing cron jobs is just as essential. Here are a few crontab options you can use to list, edit, and delete cron jobs:
crontab -l displays the list of cron jobs.
crontab -r removes all cron jobs.
crontab -e edits the crontab file.
Every user on your system gets a separate crontab file. You can also perform the above operations on another user’s file by adding their username to the command, in the form crontab -u username [options], as shown in the examples below.
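For instance, assuming a hypothetical user named alice (managing another user’s crontab typically requires root privileges):
crontab -l                  # list your own cron jobs
crontab -e                  # edit your own crontab
sudo crontab -u alice -l    # list alice's cron jobs
sudo crontab -u alice -e    # edit alice's crontab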
    A Quick Wrap-up
Executing repetitive tasks manually is time-intensive and reduces your efficiency as an administrator. Cron jobs let you automate tasks, like running a script or command at a specific time, reducing redundant workload. Hence, this article comprehensively explained how to create a cron job in Linux, and covered the proper usage of the time format and the operators with appropriate examples.
  2. by: Chris Coyier
    Mon, 03 Feb 2025 17:27:05 +0000

I kinda like the idea of the “minimal” service worker. Service Workers can be pretty damn complicated and the power of them honestly makes me a little nervous. They are middlemen between the browser and the network and I can imagine really dinking that up, myself. Not to dissuade you from using them, as they can do useful things no other technology can do.
    That’s why I like the “minimal” part. I want to understand what it’s doing extremely clearly! The less code the better.
    Tantek posted about that recently, with a minimal idea:
    That seems clearly useful. The bit about linking to an archive of the page though seems a smidge off to me. If the reason a user can’t see the page is because they are offline, a page that sends them to the Internet Archive isn’t going to work either. But I like the bit about caching and at least trying to do something.
    Jeremy Keith did some thinking about this back in 2018 as well:
    The implementation is actually just a few lines of code. A variation of it handles Tantek’s idea as well, implementing a custom offline page that could do the thing where it links off to an archive elsewhere.
    I’ll leave you with a couple more links. Have you heard the term LoFi? I’m not the biggest fan of the shortening of it because “Lo-fi” is a pretty established musical term not to mention “low fidelity” is useful in all sorts of contexts. But recently in web tech it refers to “Local First”.
    I dig the idea honestly and do see it as a place for technology (and companies that make technology) to step and really make this style of working easy. Plenty of stuff already works this way. I think of the Notes app on my phone. Those notes are always available. It doesn’t (seem to) care if I’m online or offline. If I’m online, they’ll sync up with the cloud so other devices and backups will have the latest, but if not, so be it. It better as heck work that way! And I’m glad it does, but lots of stuff on the web does not (CodePen doesn’t). But I’d like to build stuff that works that way and have it not be some huge mountain to climb.
That “eh, we’ll just sync later, whenever we have network access” part is super non-trivial, and that’s part of the issue. Technology could make easy/dumb choices like “last write wins”, but that tends to be dangerous data-loss territory that users don’t put up with. Instead, data needs to be intelligently merged, and that isn’t easy. Dropbox is a multi-billion dollar company that deals with this, and they admittedly don’t always have it perfect. One of the major solutions is the concept of CRDTs, which are an impressive idea to say the least, but are complex enough that most of us will gently back away. So I’ll simply leave you with A Gentle Introduction to CRDTs.
  3. by: Ryan Trimble
    Mon, 03 Feb 2025 15:23:37 +0000

If you follow CSS feature development as closely as we do here at CSS-Tricks, you may be like me: eager to use many of these amazing tools, but finding browser support sometimes lagging behind what might be considered “modern” CSS (whatever that means).
    Even if browser vendors all have a certain feature released, users might not have the latest versions!
    We can certainly plan for this a number of ways:
feature detection with @supports
progressively enhanced designs
polyfills
For even extra help, we turn to build tools. Chances are, you’re already using some sort of build tool in your projects today. CSS developers are most likely familiar with CSS pre-processors (such as Sass or Less), but if you don’t know, these are tools capable of compiling many CSS files into one stylesheet. CSS pre-processors help make organizing CSS a lot easier, as you can move parts of CSS into related folders and import things as needed.
    Pre-processors do not just provide organizational superpowers, though. Sass gave us a crazy list of features to work with, including:
extends
functions
loops
mixins
nesting
variables
…more, probably!
For a while, this big feature set provided a means of filling gaps missing from CSS, making Sass (or whatever preprocessor you fancy) feel like a necessity when starting a new project. CSS has evolved a lot since the release of Sass — we have so many of those features in CSS today — so it doesn’t quite feel that way anymore, especially now that we have native CSS nesting and custom properties.
    Along with CSS pre-processors, there’s also the concept of post-processing. This type of tool usually helps transform compiled CSS in different ways, like auto-prefixing properties for different browser vendors, code minification, and more. PostCSS is the big one here, giving you tons of ways to manipulate and optimize your code, another step in the build pipeline.
    In many implementations I’ve seen, the build pipeline typically runs roughly like this:
Generate static assets
Build application files
Bundle for deployment
CSS is usually handled in that first part, which includes running CSS pre- and post-processors (though post-processing might also happen after Step 2). As mentioned, the continued evolution of CSS makes a tool such as Sass less necessary, so we might have an opportunity to save some time.
    Vite for CSS
    Awarded “Most Adopted Technology” and “Most Loved Library” from the State of JavaScript 2024 survey, Vite certainly seems to be one of the more popular build tools available. Vite is mainly used to build reactive JavaScript front-end frameworks, such as Angular, React, Svelte, and Vue (made by the same developer, of course). As the name implies, Vite is crazy fast and can be as simple or complex as you need it, and has become one of my favorite tools to work with.
    Vite is mostly thought of as a JavaScript tool for JavaScript projects, but you can use it without writing any JavaScript at all. Vite works with Sass, though you still need to install Sass as a dependency to include it in the build pipeline. On the other hand, Vite also automatically supports compiling CSS with no extra steps. We can organize our CSS code how we see fit, with no or very minimal configuration necessary. Let’s check that out.
    We will be using Node and npm to install Node packages, like Vite, as well as commands to run and build the project. If you do not have node or npm installed, please check out the download page on their website.
    Navigate a terminal to a safe place to create a new project, then run:
npm create vite@latest
The command line interface will ask a few questions. You can keep it as simple as possible by choosing Vanilla and JavaScript, which will provide you with a starter template including some no-frameworks-attached HTML, CSS, and JavaScript files to help get you started.
    Before running other commands, open the folder in your IDE (integrated development environment, such as VSCode) of choice so that we can inspect the project files and folders.
    If you would like to follow along with me, delete the following files that are unnecessary for demonstration:
assets/
public/
src/
.gitignore
We should only have the following files left in our project folder:
index.html
package.json
Let’s also replace the contents of index.html with an empty HTML template:
<!doctype html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>CSS Only Vite Project</title>
  </head>
  <body>
    <!-- empty for now -->
  </body>
</html>
One last piece to set up is Vite’s dependencies, so let’s run the npm installation command:
npm install
A short sequence will occur in the terminal. Then we’ll see a new folder called node_modules/ and a package-lock.json file added in our file viewer.
node_modules is used to house all package files installed through the node package manager, and allows us to import and use installed packages throughout our applications. package-lock.json is a file usually used to make sure a development team is all using the same versions of packages and dependencies. We most likely won’t need to touch these things, but they are necessary for Node and Vite to process our code during the build.
Inside the project’s root folder, we can create a styles/ folder to contain the CSS we will write. Let’s create one file to begin with, main.css, which we can use to test out Vite.
├── public/
├── styles/
│   └── main.css
└── index.html
In our index.html file, inside the <head> section, we can include a <link> tag pointing to the CSS file:
<head>
  <meta charset="UTF-8" />
  <link rel="icon" type="image/svg+xml" href="/vite.svg" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
  <title>CSS Only Vite Project</title>
  <!-- Main CSS -->
  <link rel="stylesheet" href="styles/main.css">
</head>
Let’s add a bit of CSS to main.css:
body {
  background: green;
}
It’s not much, but it’s all we’ll need at the moment! In our terminal, we can now run the Vite build command using npm:
npm run build
With everything linked up properly, Vite will build things based on what is available within the index.html file, including our linked CSS files. The build will be very fast, and you’ll be returned to your terminal prompt.
Vite will provide a brief report, showcasing the file sizes of the compiled project. The newly generated dist/ folder is Vite’s default output directory, which we can open to see our processed files. Check out assets/index.css (the filename will include a unique hash for cache busting) and you’ll see the code we wrote, minified.
    Now that we know how to make Vite aware of our CSS, we will probably want to start writing more CSS for it to compile.
    As quick as Vite is with our code, constantly re-running the build command would still get very tedious. Luckily, Vite provides its own development server, which includes a live environment with hot module reloading, making changes appear instantly in the browser. We can start the Vite development server by running the following terminal command:
npm run dev
Vite uses the default network port 5173 for the development server. Opening the http://localhost:5173/ address in your browser will display a blank screen with a green background.
When you add any HTML to index.html or CSS to main.css, Vite will reload the page to display the changes. To stop the development server, use the keyboard shortcut CTRL+C or close the terminal to kill the process.
    At this point, you pretty much know all you need to know about how to compile CSS files with Vite. Any CSS file you link up will be included in the built file.
    Organizing CSS into Cascade Layers
One of the items on my 2025 CSS Wishlist is the ability to apply a cascade layer to a link tag. To me, this would be helpful for organizing CSS in meaningful ways, as well as for fine control over the cascade, with the benefits cascade layers provide. Unfortunately, this is a rather difficult ask when considering the way browsers paint styles in the viewport. This type of functionality is being discussed between the CSS Working Group and TAG, but it’s unclear if it’ll move forward.
    With Vite as our build tool, we can replicate the concept as a way to organize our built CSS. Inside the main.css file, let’s add the @layer at-rule to set the cascade order of our layers. I’ll use a couple of layers here for this demo, but feel free to customize this setup to your needs.
/* styles/main.css */
@layer reset, layouts;
This is all we’ll need inside our main.css. Let’s create another file for our reset. I’m a fan of my friend Mayank‘s modern CSS reset, which is available as a Node package. We can install the reset by running the following terminal command:
npm install @acab/reset.css
Now, we can import Mayank’s reset into our newly created reset.css file, as a cascade layer:
/* styles/reset.css */
@import '@acab/reset.css' layer(reset);
If there are any other reset layer stylings we want to include, we can open up another @layer reset block inside this file as well.
/* styles/reset.css */
@import '@acab/reset.css' layer(reset);

@layer reset {
  /* custom reset styles */
}
This @import statement is used to pull packages from the node_modules folder. This folder is not generally available in the built, public version of a website or application, so referencing it might cause problems if not handled properly.
    Now that we have two files (main.css and reset.css), let’s link them up in our index.html file. Inside the <head> tag, let’s add them after <title>:
<head>
  <meta charset="UTF-8" />
  <link rel="icon" type="image/svg+xml" href="/vite.svg" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
  <title>CSS Only Vite Project</title>
  <link rel="stylesheet" href="styles/main.css">
  <link rel="stylesheet" href="styles/reset.css">
</head>
The idea here is we can add each CSS file in the order we need them parsed. In this case, I’m planning to pull in each file named after the cascade layers set up in the main.css file. This may not work for every setup, but it is a helpful way to keep in mind the precedence of how cascade layers affect computed styles when rendered in a browser, as well as grouping similarly relevant files.
    Since we’re in the index.html file, we’ll add a third CSS <link> for styles/layouts.css.
<head>
  <meta charset="UTF-8" />
  <link rel="icon" type="image/svg+xml" href="/vite.svg" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
  <title>CSS Only Vite Project</title>
  <link rel="stylesheet" href="styles/main.css">
  <link rel="stylesheet" href="styles/reset.css">
  <link rel="stylesheet" href="styles/layouts.css">
</head>
Create the styles/layouts.css file with the new @layer layouts declaration block, where we can add layout-specific stylings.
/* styles/layouts.css */
@layer layouts {
  /* layouts styles */
}
For some quick, easy, and awesome CSS snippets, I tend to refer to Stephanie Eckles‘ SmolCSS project. Let’s grab the “Smol intrinsic container” code and include it within the layouts cascade layer:
/* styles/layouts.css */
@layer layouts {
  .smol-container {
    width: min(100% - 3rem, var(--container-max, 60ch));
    margin-inline: auto;
  }
}
This powerful little two-line container uses the CSS min() function to provide a responsive width, with margin-inline: auto; set to horizontally center itself and contain its child elements. We can also dynamically adjust the width using the --container-max custom property.
    Now if we re-run the build command npm run build and check the dist/ folder, our compiled CSS file should contain:
Our cascade layer declarations from main.css
Mayank’s CSS reset fully imported from reset.css
The .smol-container class added from layouts.css
As you can see, we can get quite far with Vite as our build tool without writing any JavaScript. However, if we choose to, we can extend our build’s capabilities even further by writing just a little bit of JavaScript.
    Post-processing with LightningCSS
    Lightning CSS is a CSS parser and post-processing tool that has a lot of nice features baked into it to help with cross-compatibility among browsers and browser versions. Lightning CSS can transform a lot of modern CSS into backward-compatible styles for you.
    We can install Lightning CSS in our project with npm:
npm install --save-dev lightningcss
The --save-dev flag means the package will be installed as a development dependency, as it won’t be included with our built project. We can include it within our Vite build process, but first, we will need to write a tiny bit of JavaScript, a configuration file for Vite. Create a new file called vite.config.mjs and add the following code inside:
// vite.config.mjs
export default {
  css: {
    transformer: 'lightningcss'
  },
  build: {
    cssMinify: 'lightningcss'
  }
};
Vite will now use LightningCSS to transform and minify CSS files. Now, let’s give it a test run using an oklch color. Inside main.css let’s add the following code:
/* main.css */
body {
  background-color: oklch(51.98% 0.1768 142.5);
}
Then re-running the Vite build command, we can see the background-color property added in the compiled CSS:
/* dist/index.css */
body {
  background-color: green;
  background-color: color(display-p3 0.216141 0.494224 0.131781);
  background-color: lab(46.2829% -47.5413 48.5542);
}
Lightning CSS converts the oklch() color and provides fallbacks for browsers that might not support newer color types. Following the Lightning CSS documentation for using it with Vite, we can also specify browser versions to target by installing the browserslist package.
    Browserslist will give us a way to specify browsers by matching certain conditions (try it out online!)
npm install -D browserslist
Inside our vite.config.mjs file, we can configure Lightning CSS further. Let’s import the browserslist package into the Vite configuration, as well as a module from the Lightning CSS package to help us use browserslist in our config:
// vite.config.mjs
import browserslist from 'browserslist';
import { browserslistToTargets } from 'lightningcss';
We can add configuration settings for lightningcss, containing the browser targets based on specified browser versions, to Vite’s css configuration:
// vite.config.mjs
import browserslist from 'browserslist';
import { browserslistToTargets } from 'lightningcss';

export default {
  css: {
    transformer: 'lightningcss',
    lightningcss: {
      targets: browserslistToTargets(browserslist('>= 0.25%')),
    }
  },
  build: {
    cssMinify: 'lightningcss'
  }
};
There are lots of ways to extend Lightning CSS with Vite, such as enabling specific features, excluding features we won’t need, or writing our own custom transforms.
// vite.config.mjs
import browserslist from 'browserslist';
import { browserslistToTargets, Features } from 'lightningcss';

export default {
  css: {
    transformer: 'lightningcss',
    lightningcss: {
      targets: browserslistToTargets(browserslist('>= 0.25%')),
      // Including `light-dark()` and `colors()` functions
      include: Features.LightDark | Features.Colors,
    }
  },
  build: {
    cssMinify: 'lightningcss'
  }
};
For a full list of the Lightning CSS features, check out their documentation on feature flags.
    Is any of this necessary?
    Reading through all this, you may be asking yourself if all of this is really necessary. The answer: absolutely not! But I think you can see the benefits of having access to partialized files that we can compile into unified stylesheets.
I doubt I’d go to these lengths for smaller projects. However, if building something with more complexity, such as a design system, I might reach for these tools for organizing code, cross-browser compatibility, and thoroughly optimizing compiled CSS.
    Compiling CSS With Vite and Lightning CSS originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  4. by: Zainab Sutarwala
    Mon, 03 Feb 2025 14:38:00 +0000

There was a time when companies evaluated the performance of software engineers based on how quickly they delivered tasks.
But 2025 is a different scenario for software development teams.
Nowadays, this isn’t the only criterion. Today, professional software engineers and developers are aware of the importance of soft skills.
Things such as open-mindedness, creativity, and willingness to learn something new are soft skills that anyone can use, no matter which industry they are in.
    Why Are the Soft Skills Very Important?
For a software engineer, soft skills are essential because they make you a better professional.
Even if you have the hard skills required for the job, you will not get hired if you lack the soft skills that help you connect with the interviewer and the people around you.
With soft skills, developers and programmers are well-equipped to utilize their technical skills to the fullest extent.
The soft skills of engineers and programmers affect how well you work with people on your own tech team and on other teams, which will positively impact your career development.
Tools like JavaScript frameworks and APIs automate the most technical processes. An example is Filestack, which allows the creation of high-performance software that handles the management of millions of files.
Manually adding video, image, or file management functions to an application reliably can be nearly impossible without the right tools. In this case, the developer needs more than technical skills to convince the business to invest in those tools.
    Top Soft Skills for Software Developers
    1. Creativity
Creativity isn’t only about artistic expression. Technical professions demand a good amount of creativity too.
It allows good software engineers and programmers to solve complex problems and find new opportunities while developing innovative products or apps and improving current ones.
    You need to think creatively and practice approaching various problems correctly.
    2. Patience
Software development isn’t a simple feat. It is a complex effort that involves long processes.
Most activities take plenty of time in agile environments, whether it is project kick-off, execution, deployment, testing, or updates.
Patience is vital when you’re starting out as a developer or programmer. The most important person you will ever need to be patient with is yourself.
It would be best to give yourself sufficient time to make mistakes and fix them.
If you’re patient with yourself, it becomes easier to stay patient with the people around you. At times, people will require more convincing.
You have to do your best to “sell” your idea and your approach to them.
    3. Communication
Communication is the basis of collaboration and is thus crucial to any software development project.
Whether you’re communicating with colleagues, clients, or managers, do not leave anybody guessing; ensure every developer on the team is on the same page about every aspect of a project.
Besides the traditional skills of respect, assertiveness, active listening, empathy, and conflict resolution, software engineers have to master explaining technical information clearly to non-technical people.
A professional also needs to work out what somebody is trying to ask when that person does not understand the software’s specific parameters.
    4. Listenability
These soft talents are intertwined: being a good communicator and being a good listener go hand in hand.
Keep in mind that everybody you deal with or communicate with deserves to be heard, and they might have information that can help you do your work more efficiently.
Put other distractions aside and focus totally on the person talking to you.
You must also keep an eye out for nonverbal communication indicators, since they will disclose a lot about what somebody is saying.
According to experts in the field, 93% of communication is nonverbal.
Thus, you must pay close attention to what colleagues or clients are saying, even when they are not saying anything out loud.
    5. Teamwork
It doesn’t matter what you plan to do; there will be a time when you need to work as part of a team.
Whether it is a team of designers, developers, programmers, or a project team, developers have to work well with others to succeed.
Working well with the whole team makes work more fun and makes people more willing to help you out in the future.
    You might not sometimes agree with other people in the team. However, having a difference of opinion helps to build successful companies.
    6. People and Time Management
Software development is about completing a project within a stipulated time frame.
Usually, software developers and engineers are heavily involved in managing people and different projects. Thus, management is an essential soft skill for software developers.
People and time management are two critical characteristics that recruiters search for in a software developer candidate.
Software developers at all experience levels have to work well in a team and meet time estimates.
Thus, if you want to become a successful software programmer at a good company, it is essential to learn to manage people and time successfully.
    7. Problem-solving
There will be a point in your career when you face a problem.
Problems can happen regularly or rarely; either way, they are inevitable. The way you handle these situations will leave a massive impact on your career and the company you are working for.
Thus, problem-solving is an essential soft skill that employers search for in their prospective candidates; the more examples of problem-solving you have, the better your prospects will be.
When approaching a new problem, view it objectively, even if you caused it accidentally.
    8. Adaptability
    Adaptability is another essential soft skill required in today’s fast-evolving world.
This skill means being capable of changing with a changing environment and adjusting course as situations come up.
Employers value adaptability, and it can give you significant benefits in your career.
    9. Self-awareness
Software developers must be confident in the things they know and humble about the things they don’t.
Knowing which areas you need to improve is itself a form of true confidence.
If software developers are aware of their weaker sides, they will seek proper training or mentorship from their colleagues and managers.
In most cases, when people deny that they do not know something, it is a sign of insecurity.
However, if software developers feel secure enough to acknowledge their weaknesses, it is a sign of maturity and a valuable skill to possess.
In the same way, being confident in the things they do know is also very important.
Self-confidence allows people to speak their minds, make fewer mistakes, and face criticism.
    10. Open-mindedness
If you are open-minded, you are keen to accept innovative ideas, whether they are yours or somebody else’s.
Even the worst ideas can inspire something incredible if you consider them before dismissing them. The more ideas you gather, the more projects you will have the potential to work on.
Though not every idea you have may turn into something, you won’t know what it could be until you have thought about it in depth.
It helps to keep an open mind to new ideas from your team, your company, and your clients.
Your clients are the people who will use your product; thus, they are the best ones to tell you what works and what they require.
    Final Thoughts
All the skills outlined here complement one another. Good communication leads to better collaboration and team cohesiveness.
Knowing your strengths and weaknesses will improve your accountability. The result is a well-rounded software developer with solid potential.
The soft skills mentioned in this article are the best input for a brilliant career, since they bring several benefits.
Perhaps you worry that you don’t have these soft skills, or that it is too late to develop them now.
The good news is that all of these soft skills can be learned or improved.
In 2025, there are plenty of resources available to help developers with that, and it is not very difficult to improve your soft skills. It’s better to start now.
    The post Top 10 Soft Skills for Software Developers in 2025 appeared first on The Crazy Programmer.
  5. Chrome 133 Goodies

    by: Geoff Graham
    Fri, 31 Jan 2025 15:27:50 +0000

I often wonder what it’s like working for the Chrome team. You must get issued some sort of government-level security clearance for the latest browser builds that grants you permission to bash on them ahead of everyone else and come up with these rad demos showing off the latest features. No, I’m not jealous, why are you asking?
    Totally unrelated, did you see the release notes for Chrome 133? It’s currently in beta, but the Chrome team has been publishing a slew of new articles with pretty incredible demos that are tough to ignore. I figured I’d round those up in one place.
    attr() for the masses!
    We’ve been able to use HTML attributes in CSS for some time now, but it’s been relegated to the content property and only parsed strings.
<h1 data-color="orange">Some text</h1>

h1::before {
  content: ' (Color: ' attr(data-color) ') ';
}
Bramus demonstrates how we can now use it on any CSS property, including custom properties, in Chrome 133. So, for example, we can take the attribute’s value and put it to use on the element’s color property:
h1 {
  color: attr(data-color type(<color>), #fff)
}
This is a trite example, of course. But it helps illustrate that there are three moving pieces here:
the attribute (data-color)
the type (type(<color>))
the fallback value (#fff)
We make up the attribute. It’s nice to have a wildcard we can insert into the markup and hook into for styling. The type() is a new deal that helps CSS know what sort of value it’s working with. If we had been working with a numeric value instead, we could ditch that in favor of something less verbose. For example, let’s say we’re using an attribute for the element’s font size:
<div data-size="20">Some text</div>
Now we can hook into the data-size attribute and use the assigned value to set the element’s font-size property, based in px units:
div {
  font-size: attr(data-size px, 16px);
}
The fallback value is optional and might not be necessary depending on your use case.
    Scroll states in container queries!
    This is a mind-blowing one. If you’ve ever wanted a way to style a sticky element when it’s in a “stuck” state, then you already know how cool it is to have something like this. Adam Argyle takes the classic pattern of an alphabetical list and applies styles to the letter heading when it sticks to the top of the viewport. The same is true of elements with scroll snapping and elements that are scrolling containers.
    In other words, we can style elements when they are “stuck”, when they are “snapped”, and when they are “scrollable”.
    Quick little example that you’ll want to open in a Chromium browser:
The general idea (and that’s all I know for now) is that we register a container… you know, a container that we can query. We give that container a container-type that is set to the type of scrolling we’re working with. In this case, we’re working with sticky positioning where the element “sticks” to the top of the page.
.sticky-nav {
  container-type: scroll-state;
}
A container can’t query itself, so that basically has to be a wrapper around the element we want to stick. Menus are a little funny because we have the <nav> element and usually stuff it with an unordered list of links. So, our <nav> can be the container we query since we’re effectively sticking an unordered list to the top of the page.
<nav class="sticky-nav">
  <ul>
    <li><a href="#">Home</a></li>
    <li><a href="#">About</a></li>
    <li><a href="#">Blog</a></li>
  </ul>
</nav>
We can put the sticky logic directly on the <nav> since it’s technically holding what gets stuck:
.sticky-nav {
  container-type: scroll-state; /* set a scroll container query */
  position: sticky;             /* set sticky positioning */
  top: 0;                       /* stick to the top of the page */
}
I suppose we could use the container shorthand if we were working with multiple containers and needed to distinguish one from another with a container-name. Either way, now that we’ve defined a container, we can query it using @container! In this case, we declare the type of container we’re querying:
@container scroll-state() {
}
And we tell it the state we’re looking for:
@container scroll-state(stuck: top) {
}
If we were working with a sticky footer instead of a menu, then we could say stuck: bottom instead. But the kicker is that once the <nav> element sticks to the top, we get to apply styles to it in the @container block, like so:
.sticky-nav {
  border-radius: 12px;
  container-type: scroll-state;
  position: sticky;
  top: 0;

  /* When the nav is in a "stuck" state */
  @container scroll-state(stuck: top) {
    border-radius: 0;
    box-shadow: 0 3px 10px hsl(0 0 0 / .25);
    width: 100%;
  }
}
It seems to work when nesting other selectors in there. So, for example, we can change the links in the menu when the navigation is in its stuck state:
.sticky-nav {
  /* Same as before */

  a {
    color: #000;
    font-size: 1rem;
  }

  /* When the nav is in a "stuck" state */
  @container scroll-state(stuck: top) {
    /* Same as before */

    a {
      color: orangered;
      font-size: 1.5rem;
    }
  }
}
So, yeah. As I was saying, it must be pretty cool to be on the Chrome developer team and get ahead of stuff like this as it’s released. Big ol’ thanks to Bramus and Adam for consistently cluing us in on what’s new and doing the great work it takes to come up with such amazing demos to show things off.
    Chrome 133 Goodies originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  6. by: Geoff Graham
    Fri, 31 Jan 2025 14:11:00 +0000

::view-transition   /* 👈 Captures all the clicks! */
└─ ::view-transition-group(root)
   └─ ::view-transition-image-pair(root)
      ├─ ::view-transition-old(root)
      └─ ::view-transition-new(root)
The trick? It’s that sneaky little pointer-events property! Slapping it directly on the ::view-transition pseudo-element allows us to click “under” it, meaning the full page is interactive even while the view transition is running.
::view-transition {
  pointer-events: none;
}
I always, always, always forget about pointer-events, so thanks to Bramus for posting this little snippet. I also appreciate the additional note about removing the :root element from participating in the view transition:
:root {
  view-transition-name: none;
}
He quotes the spec noting the reason why snapshots do not respond to hit-testing:
    Keeping the page interactive while a View Transition is running originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  7. by: Abhishek Kumar
    Fri, 31 Jan 2025 17:03:02 +0530

    I’ve been using Cloudflare Tunnel for over a year, and while it’s great for hosting static HTML content securely, it has its limitations.
    For instance, if you’re running something like Jellyfin, you might run into issues with bandwidth limits, which can lead to account bans due to their terms of service.
    Cloudflare Tunnel is designed with lightweight use cases in mind, but what if you need something more robust and self-hosted?
    Let me introduce you to some fantastic open-source alternatives that can give you the freedom to host your services without restrictions.
    1. ngrok (OSS Edition)
    ngrok is a globally distributed reverse proxy designed to secure, protect, and accelerate your applications and network services, regardless of where they are hosted.
    Acting as the front door to your applications, ngrok integrates a reverse proxy, firewall, API gateway, and global load balancing into one seamless solution.
    Although the original open-source version of ngrok (v1) is no longer maintained, the platform continues to contribute to the open-source ecosystem with tools like Kubernetes operators and SDKs for popular programming languages such as Python, JavaScript, Go, Rust, and Java.
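As a quick illustration of the typical workflow (assuming you have already signed up and added your auth token as described in ngrok’s docs), exposing a local web app is a one-liner:
ngrok http 8080   # tunnel a public URL to the app listening on local port 8080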
    Key features:
Securely connect APIs and databases across networks without complex configurations.
Expose local applications to the internet for demos and testing without deployment.
Simplify development by inspecting and replaying HTTP callback requests.
Implement advanced traffic policies like rate limiting and authentication with a global gateway-as-a-service.
Control device APIs securely from the cloud using ngrok on IoT devices.
Capture, inspect, and replay traffic to debug and optimize web services.
Includes SDKs and integrations for popular programming languages to streamline workflows.
2. frp (Fast Reverse Proxy)
    frp (Fast Reverse Proxy) is a high-performance tool designed to expose local servers located behind NAT or firewalls to the internet.
    Supporting protocols like TCP, UDP, HTTP, and HTTPS, frp enables seamless request forwarding to internal services via custom domain names.
    It also includes a peer-to-peer (P2P) connection mode for direct communication, making it a versatile solution for developers and system administrators.
    Key features:
Expose local servers securely, even behind NAT or firewalls, using TCP, UDP, HTTP, or HTTPS protocols.
Provide token and OIDC authentication for secure connections.
Support advanced configurations such as encryption, compression, and TLS for enhanced security.
Enable efficient traffic handling with features like TCP stream multiplexing, QUIC protocol support, and connection pooling.
Facilitate monitoring and management through a server dashboard, client admin UI, and Prometheus integration.
Offer flexible routing options, including URL routing, custom subdomains, and HTTP header rewriting.
Implement load balancing and service health checks for reliable performance.
Allow for port reuse, port range mapping, and bandwidth limits for granular control.
Simplify SSH tunneling with a built-in SSH Tunnel Gateway.
3. localtunnel
    Localtunnel is an open-source, self-hosted tool that simplifies the process of exposing local web services to the internet.
    By creating a secure tunnel, Localtunnel allows developers to share their local resources without needing to configure DNS or firewall settings.
    It’s built on Node.js and can be easily installed using npm.
    While Localtunnel is straightforward and effective, the project hasn't seen active maintenance since 2022, and the default Localtunnel.me server's long-term reliability is uncertain.
    However, you can host your own Localtunnel server for better control and scalability.
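For reference, a minimal sketch of the flow with the public client looks like this (the port number is just an example):
npm install -g localtunnel   # install the lt client
lt --port 3000               # expose the app running on local port 3000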
    Key features
Secure HTTPS for all tunnels, ensuring safe connections.
Share your local development environment with a unique, publicly accessible URL.
Test webhooks and external API callbacks with ease.
Integrate with cloud-based browser testing tools for UI testing.
Restart your local server seamlessly, as Localtunnel automatically reconnects.
Request a custom subdomain or proxy to a hostname other than localhost for added flexibility.
4. boringproxy
    boringproxy is a reverse proxy and tunnel manager designed to simplify the process of securely exposing self-hosted web services to the internet.
    Whether you're running a personal website, Nextcloud, Jellyfin, or other services behind a NAT or firewall, boringproxy handles all the complexities, including HTTPS certificate management and NAT traversal, without requiring port forwarding or extensive configuration.
    It’s built with self-hosters in mind, offering a simple, fast, and secure solution for remote access.
    Key features
100% free and open source under the MIT license, ensuring transparency and flexibility.
No configuration files required—boringproxy works with sensible defaults and simple CLI parameters for easy adjustments.
No need for port forwarding, NAT traversal, or firewall rule configuration, as boringproxy handles it all.
End-to-end encryption with optional TLS termination at the server, client, or application, integrated seamlessly with Let's Encrypt.
Fast web GUI for managing tunnels, which works great on both desktop and mobile browsers.
Fully configurable through an HTTP API, allowing for automation and integration with other tools.
Cross-platform support on Linux, Windows, Mac, and ARM devices (e.g., Raspberry Pi and Android).
SSH support for those who prefer using a standard SSH client for tunnel management.
5. zrok
    zrok is a next-generation, peer-to-peer sharing platform built on OpenZiti, a programmable zero-trust network overlay.
    It enables users to share resources securely, both publicly and privately, without altering firewall or network configurations.
    Designed for technical users, zrok provides frictionless sharing of HTTP, TCP, and UDP resources, along with files and custom content.
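As a rough sketch of that quickstart (the account token and port are placeholders; check the zrok documentation for the exact syntax of your version):
zrok enable <your-account-token>    # enable this environment for your zrok account
zrok share public localhost:8080    # share a local web service at a public URL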
Share resources with non-zrok users over the public internet or directly with other zrok users in a peer-to-peer manner.
Works seamlessly on Windows, macOS, and Linux systems.
Start sharing within minutes using the zrok.io service. Download the binary, create an account, enable your environment, and share with a single command.
Easily expose local resources like localhost:8080 to public users without compromising security.
Share "network drives" publicly or privately and mount them on end-user systems for easy access.
Integrate zrok’s sharing capabilities into your applications with the Go SDK, which supports net.Conn and net.Listener for familiar development workflows.
Deploy zrok on a Raspberry Pi or scale it for large service instances. The single binary contains everything needed to operate your own zrok environment.
Leverages OpenZiti’s zero-trust principles for secure and programmable network overlays.
6. Pagekite
    PageKite is a veteran in the tunneling space, providing HTTP(S) and TCP tunnels for more than 14 years. It offers features like IP whitelisting, password authentication, and supports custom domains.
    While the project is completely open-source and written in Python, the public service imposes limits, such as bandwidth caps, to prevent abuse.
    Users can unlock additional features and higher bandwidth through affordable payment plans.
    The free tier provides 2 GB of monthly transfer quota and supports custom domains, making it accessible for personal and small-scale use.
    Key features
Enables any computer, such as a Raspberry Pi, laptop, or even old cell phones, to act as a server for hosting services like WordPress, Nextcloud, or Mastodon while keeping your home IP hidden.
Provides simplified SSH access to mobile or virtual machines and ensures privacy by keeping firewall ports closed.
Supports embedded developers with features like naming and accessing devices in the field, secure communications via TLS, and scaling solutions for both lightweight and large-scale deployments.
Offers web developers the ability to test and debug work remotely, interact with secure APIs, and run webhooks, API servers, or Git repositories directly from their systems.
Utilizes a global relay network to ensure low latency, high availability, and redundancy, with infrastructure managed since 2010.
Ensures privacy by routing all traffic through its relays, hiding your IP address, and supporting both end-to-end and wildcard TLS encryption.
7. Chisel
    Chisel is a fast and efficient TCP/UDP tunneling tool transported over HTTP and secured using SSH.
    Written in Go (Golang), Chisel is designed to bypass firewalls and provide a secure endpoint into your network.
    It is distributed as a single executable that functions as both client and server, making it easy to set up and use.
    Key features
Offers a simple setup process with a single executable for both client and server functionality.
Secures connections using SSH encryption and supports authenticated client and server connections through user configuration files and fingerprint matching.
Automatically reconnects clients with exponential backoff, ensuring reliability in unstable networks.
Allows clients to create multiple tunnel endpoints over a single TCP connection, reducing overhead and complexity.
Supports reverse port forwarding, enabling connections to pass through the server and exit via the client.
Provides optional SOCKS5 support for both clients and servers, offering additional flexibility in routing traffic.
Enables tunneling through SOCKS or HTTP CONNECT proxies and supports SSH over HTTP using ssh -o ProxyCommand.
Performs efficiently, making it suitable for high-performance requirements.
8. Telebit
    Telebit has quickly become one of my favorite tunneling tools, and it’s easy to see why. It's still fairly new but does a great job of getting things done.
    By installing Telebit Remote on any device, be it your laptop, Raspberry Pi, or another device, you can easily access it from anywhere.
    The magic happens thanks to a relay system that allows multiplexed incoming connections on any external port, making remote access a breeze.
    Not only that, but it also lets you share files and configure it like a VPN.
    Key features
Share files securely between devices
Access your Raspberry Pi or other devices from behind a firewall
Use it like a VPN for additional privacy and control
SSH over HTTPS, even on networks with restricted ports
Simple setup with clear documentation and an installer script that handles everything
9. tunnel.pyjam.as
    As a web developer, one of my favorite tools for quickly sharing projects with clients is tunnel.pyjam.as.
    It allows you to set up SSL-terminated, ephemeral HTTP tunnels to your local machine without needing to install any custom software, thanks to Wireguard.
    It’s perfect for when you want to quickly show someone a project you’re working on without the hassle of complex configurations.
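If memory serves, the advertised flow is roughly the following (the port is an example; double-check the instructions served at tunnel.pyjam.as before relying on them):
curl https://tunnel.pyjam.as/8000 > tunnel.conf   # request a tunnel config for local port 8000
wg-quick up ./tunnel.conf                         # start the WireGuard tunnel
wg-quick down ./tunnel.conf                       # tear the tunnel down when finished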
    Key features
No software installation required, thanks to Wireguard
Quickly set up a reverse proxy to share your local services
SSL-terminated tunnels for secure connections
Simple to use with just a curl command to start and stop tunnels
Ideal for quick demos or temporary access to local projects
Final thoughts
    When it comes to tunneling tools, there’s no shortage of options and each of the projects we’ve discussed here offers something unique.
    Personally, I’m too deeply invested in Cloudflare Tunnel to stop using it anytime soon. It’s become a key part of my workflow, and I rely on it for many of my use cases.
    However, that doesn’t mean I won’t continue exploring these open-source alternatives. I’m always excited to see how they evolve.
    For instance, with tunnel.pyjam.as, I find it incredibly time-saving to simply edit the tunnel.conf file and run its WireGuard instance to quickly share my projects with clients.
    I’d love to hear what you think! Have you tried any of these open-source tunneling tools, or do you have your own favorites? Let me know in the comments.
  8. By: Janus Atienza
    Fri, 31 Jan 2025 00:11:21 +0000

    In today’s competitive digital landscape, small businesses need to leverage every tool and strategy available to stay relevant and grow. One such strategy is content marketing, which has proven to be an effective way to reach, engage, and convert potential customers. However, for many small business owners, managing content creation and distribution can be time-consuming and resource-intensive. This is where outsourcing content marketing services comes into play. Let’s explore why this approach is not only smart but also essential for the long-term success of small businesses.
    1. Expertise and Professional Quality
    Outsourcing content marketing services allows small businesses to tap into the expertise of professionals who specialize in content creation and marketing strategies. These experts are equipped with the skills, tools, and experience necessary to craft high-quality content that resonates with target audiences. Whether it’s blog posts, social media updates, or email newsletters, professional content marketers understand how to write compelling copy that engages readers and drives results. For Linux/Unix focused content, this might include experts who understand shell scripting for automation or using tools like grep for SEO analysis.
    In addition, they are well-versed in SEO best practices, which means they can optimize content to rank higher in search engines, ultimately driving more traffic to your website. This level of expertise is difficult to replicate in-house, especially for small businesses with limited resources.
    2. Cost Efficiency
    For many small businesses, hiring a full-time in-house marketing team may not be financially feasible. Content creation involves a range of tasks, from writing and editing to publishing and promoting. This can be a significant investment in terms of both time and money. By outsourcing content marketing services, small businesses can access the same level of expertise without the overhead costs associated with hiring additional employees. This can be especially true in the Linux/Unix world, where open-source tools can significantly reduce software costs.
    Outsourcing allows businesses to pay only for the services they need, whether it’s a one-off blog post or an ongoing content strategy. This flexibility can help businesses manage their budgets effectively while still benefiting from high-quality content marketing efforts.
    3. Focus on Core Business Functions
    Outsourcing content marketing services frees up time for small business owners and their teams to focus on core business functions. Small businesses often operate with limited personnel, and each member of the team is usually responsible for multiple tasks. When content marketing is outsourced, the business can concentrate on what it does best—whether that’s customer service, product development, or sales—without getting bogged down in the complexities of content creation. For example, a Linux system administrator can focus on server maintenance instead of writing blog posts.
    This improved focus on core operations can lead to better productivity and business growth, while the outsourced content team handles the strategy and execution of the marketing efforts.
    4. Consistency and Reliability
    One of the key challenges of content marketing is maintaining consistency. Inconsistent content delivery can confuse your audience and hurt your brand’s credibility. Outsourcing content marketing services ensures that content is consistently produced, published, and promoted according to a set schedule. Whether it’s weekly blog posts or daily social media updates, a professional team will adhere to a content calendar, ensuring that your business maintains a strong online presence. This can be further enhanced by using automation scripts (common in Linux/Unix environments) to schedule and distribute content.
    Consistency is crucial for building a loyal audience, and a reliable content marketing team will ensure that your business stays top-of-mind for potential customers.
    5. Access to Advanced Tools and Technologies
    Effective content marketing requires the use of various tools and technologies, from SEO and analytics platforms to content management systems and social media schedulers. Small businesses may not have the budget to invest in these tools or the time to learn how to use them effectively. Outsourcing content marketing services allows businesses to benefit from these advanced tools without having to make a significant investment. This could include access to specialized Linux-based SEO tools or experience with open-source CMS platforms like Drupal or WordPress.
    Professional content marketers have access to premium tools that can help with keyword research, content optimization, performance tracking, and more. These tools provide valuable insights that can inform future content strategies and improve the overall effectiveness of your marketing efforts.
    6. Scalability
    As small businesses grow, their content marketing needs will evolve. Outsourcing content marketing services provides the flexibility to scale efforts as necessary. Whether you’re launching a new product, expanding into new markets, or simply need more content to engage your growing audience, a content marketing agency can quickly adjust to your changing needs. This is especially relevant for Linux-based businesses that might experience rapid growth due to the open-source nature of their offerings.
    This scalability ensures that small businesses can maintain an effective content marketing strategy throughout their growth journey, without the need to continually hire or train new employees.
    Conclusion
    Outsourcing content marketing services is a smart move for small businesses looking to improve their online presence, engage with their target audience, and drive growth. By leveraging the expertise, cost efficiency, and scalability that outsourcing offers, small businesses can focus on what matters most—running their business—while leaving the content marketing to the professionals. Especially for businesses in the Linux/Unix ecosystem, this allows them to concentrate on technical development while expert marketers reach their specific audience. In a digital world where content is king, investing in high-quality content marketing services can make all the difference.
    The post Content Marketing for Linux/Unix Businesses: Why Outsourcing Makes Sense appeared first on Unixmen.
  9. The Mistakes of CSS

    by: Juan Diego Rodríguez
    Thu, 30 Jan 2025 14:31:08 +0000

Surely you have seen a CSS property and thought “Why?”
    You are not alone. CSS was born in 1996 (it can legally order a beer, you know!) and was initially considered a way to style documents; I don’t think anyone imagined everything CSS would be expected to do nearly 30 years later. If we had a time machine, many things would be done differently to match conventions or to make more sense. Heck, even the CSS Working Group admits to wanting a time-traveling contraption… in the specifications!
    If by some stroke of opportunity, I was given free rein to rename some things in CSS, a couple of ideas come to mind, but if you want more, you can find an ongoing list of mistakes made in CSS… by the CSS Working Group! Take, for example, background-repeat:
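As one illustration of the sort of quirk on that list (an example of mine, not necessarily the Working Group's exact note), the property's legacy single keywords overlap with the two-value syntax that came later:

/* these two declarations mean exactly the same thing */
.banner { background-repeat: repeat-x; }
.banner { background-repeat: repeat no-repeat; }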
Why not fix them? Sadly, it isn’t as easy as simply shipping a fix. People already built their websites with these quirks in mind, and changing them would break those sites. Consider it technical debt.
    This is why I think the CSS Working Group deserves an onslaught of praise. Designing new features that are immutable once shipped has to be a nerve-wracking experience that involves inexact science. It’s not that we haven’t seen the specifications change or evolve in the past — they most certainly have — but the value of getting things right the first time is a beast of burden.
    The Mistakes of CSS originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  10. by: Abhishek Prakash
    Wed, 29 Jan 2025 20:04:25 +0530

    What's in a name? Sometimes the name can be deceptive.
    For example, in the Linux Tips and Tutorials section of this newsletter, I am sharing a few commands that have nothing to do with what their name indicates 😄
    Here are the other highlights of this edition of LHB Linux Digest:
Nice and renice commands
ReplicaSet in Kubernetes
Self-hosted code snippet manager
And more tools, tips and memes for you
This edition of LHB Linux Digest newsletter is supported by RELIANOID.
❇️ Comprehensive Load Balancing Solutions For Modern Networks
    RELIANOID’s load balancing solutions combine the power of SD-WAN, secure application delivery, and elastic load balancing to optimize traffic distribution and ensure unparalleled performance.
    With features like a robust Web Application Firewall (WAF) and built-in DDoS protection, your applications remain secure and resilient against cyber threats. High availability ensures uninterrupted access, while open networking and user experience networking enhance flexibility and deliver a seamless experience across all environments, from on-premises to cloud.
Free Load Balancer Download | Community Edition by RELIANOID: open source load balancing software for providing high availability and content switching services.
📖 Linux Tips and Tutorials
There is an install command in Linux but it doesn't install anything.
There is a hash command in Linux but it has nothing to do with hashing passwords.
There is a tree command in Linux but it has nothing to do with plants.
There is a wc command in Linux and it has nothing to do with washrooms 🚻 (I understand that you know what wc stands for in the command but I still find it amusing).
Using nice and renice commands to change process priority.
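A minimal sketch of that last item (the command, niceness values, and PID below are illustrative):

nice -n 10 tar -czf /tmp/backup.tar.gz /home   # start a job at a lower priority
renice -n 5 -p 1234                            # change the niceness of an already running process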
Change Process Priority With nice and renice Commands: You can modify whether a certain process should get priority in consuming CPU with the nice and renice commands. (Linux Handbook)
     
  11. by: Juan Diego Rodríguez
    Wed, 29 Jan 2025 14:13:53 +0000

    Have you ever stumbled upon something new and went to research it just to find that there is little-to-no information about it? It’s a mixed feeling: confusing and discouraging because there is no apparent direction, but also exciting because it’s probably new to lots of people, not just you. Something like that happened to me while writing an Almanac’s entry for the @view-transition at-rule and its types descriptor.
    You may already know about Cross-Document View Transitions: With a few lines of CSS, they allow for transitions between two pages, something that in the past required a single-app framework with a side of animation library. In other words, lots of JavaScript.
    To start a transition between two pages, we have to set the @view-transition at-rule’s navigation descriptor to auto on both pages, and that gives us a smooth cross-fade transition between the two pages. So, as the old page fades out, the new page fades in.
@view-transition {
  navigation: auto;
}
That’s it! And navigation is the only descriptor we need. In fact, it’s the only descriptor available for the @view-transition at-rule, right? Well, turns out there is another descriptor, a lesser-known brother, and one that probably envies how much attention navigation gets: the types descriptor.
    What do people say about types?
Cross-Document View Transitions are still fresh from the oven, so it’s normal that people haven’t fully dissected every aspect of them, especially since they introduce a lot of new stuff: a new at-rule, a couple of new properties, and tons of pseudo-elements and pseudo-classes. However, it still surprises me how little types gets mentioned. Some documentation fails to even name it among the valid @view-transition descriptors. Luckily, though, the CSS specification does offer a little clarification about it.
    To be more precise, types can take a space-separated list with the names of the active types (as <custom-ident>), or none if there aren’t valid active types for that page.
Name: types
For: @view-transition
Value: none | <custom-ident>+
Initial: none
So the following values would work inside types:
@view-transition {
  navigation: auto;
  types: bounce;
}

/* or a list */
@view-transition {
  navigation: auto;
  types: bounce fade rotate;
}
Yes, but what exactly are “active” types? That word “active” seems to be doing a lot of heavy lifting in the CSS specification’s definition and I want to unpack that to better understand what it means.
    Active types in view transitions
    The problem: A cross-fade animation for every page is good, but a common thing we need to do is change the transition depending on the pages we are navigating between. For example, on paginated content, we could slide the content to the right when navigating forward and to the left when navigating backward. In a social media app, clicking a user’s profile picture could persist the picture throughout the transition. All this would mean defining several transitions in our CSS, but doing so would make them conflict with each other in one big slop. What we need is a way to define several transitions, but only pick one depending on how the user navigates the page.
    The solution: Active types define which transition gets used and which elements should be included in it. In CSS, they are used through :active-view-transition-type(), a pseudo-class that matches an element if it has a specific active type. Going back to our last example, we defined the document’s active type as bounce. We could enclose that bounce animation behind an :active-view-transition-type(bounce), such that it only triggers on that page.
/* This one will be used! */
html:active-view-transition-type(bounce) {
  &::view-transition-old(page) {
    /* Custom Animation */
  }
  &::view-transition-new(page) {
    /* Custom Animation */
  }
}
This prevents other view transitions from running if they don’t match any active type:
/* This one won't be used! */
html:active-view-transition-type(slide) {
  &::view-transition-old(page) {
    /* Custom Animation */
  }
  &::view-transition-new(page) {
    /* Custom Animation */
  }
}
I asked myself whether this triggers the transition when going to the page, when out of the page, or in both instances. Turns out it only limits the transition when going to the page, so the last bounce animation is only triggered when navigating toward a page with a bounce value on its types descriptor, but not when leaving that page. This allows for custom transitions depending on which page we are going to.
    The following demo has two pages that share a stylesheet with the bounce and slide view transitions, both respectively enclosed behind an :active-view-transition-type(bounce) and :active-view-transition-type(slide) like the last example. We can control which page uses which view transition through the types descriptor.
    The first page uses the bounce animation:
@view-transition {
  navigation: auto;
  types: bounce;
}
The second page uses the slide animation:
@view-transition {
  navigation: auto;
  types: slide;
}
You can visit the demo here and see the full code over at GitHub.
    The types descriptor is used more in JavaScript
    The main problem is that we can only control the transition depending on the page we’re navigating to, which puts a major cap on how much we can customize our transitions. For instance, the pagination and social media examples we looked at aren’t possible just using CSS, since we need to know where the user is coming from. Luckily, using the types descriptor is just one of three ways that active types can be populated. Per spec, they can be:
1. Passed as part of the arguments to startViewTransition(callbackOptions).
2. Mutated at any time, using the transition’s types.
3. Declared for a cross-document view transition, using the types descriptor.
The first option is when starting a view transition from JavaScript, but we want to trigger them when the user navigates to the page by themselves (like when clicking a link). The third option is using the types descriptor which we already covered. The second option is the right one for this case! Why? It lets us set the active transition type on demand, and we can perform that change just before the transition happens using the pagereveal event. That means we can get the user’s start and end page from JavaScript and then set the correct active type for that case.
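Here is a hedged sketch of that second option (the URL patterns and type names are mine): listen for pagereveal, inspect where the navigation came from and where it is going via the Navigation API, then add the matching type to the in-flight transition.

window.addEventListener("pagereveal", (e) => {
  // Only relevant when a cross-document view transition is actually happening
  if (!e.viewTransition || !navigation.activation?.from) return;

  const from = new URL(navigation.activation.from.url);
  const to = new URL(navigation.activation.entry.url);

  // Hypothetical rule: slide between paginated URLs, bounce everywhere else
  const type =
    from.pathname.startsWith("/page/") && to.pathname.startsWith("/page/")
      ? "slide"
      : "bounce";

  e.viewTransition.types.add(type);
});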
    I must admit, I am not the most experienced guy to talk about this option, so once I demo the heck out of different transitions with active types I’ll come back with my findings! In the meantime, I encourage you to read about active types here if you are like me and want more on view transitions:
View transition types in cross-document view transitions (Bramus)
Customize the direction of a view transition with JavaScript (Umar Hansa)
What on Earth is the `types` Descriptor in View Transitions? originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  12. by: LHB Community
    Wed, 29 Jan 2025 18:26:26 +0530

    Kubernetes is a powerful container orchestration platform that enables developers to manage and deploy containerized applications with ease. One of its key components is the ReplicaSet, which plays a critical role in ensuring high availability and scalability of applications.
    In this guide, we will explore the ReplicaSet, its purpose, and how to create and manage it effectively in your Kubernetes environment.
    What is a ReplicaSet in Kubernetes?
    A ReplicaSet in Kubernetes is a higher-level abstraction that ensures a specified number of pod replicas are running at all times. If a pod crashes or becomes unresponsive, the ReplicaSet automatically creates a new pod to maintain the desired state. This guarantees high availability and resilience for your applications.
    The key purposes of a ReplicaSet include:
Scaling Pods: ReplicaSets manage the replication of pods, ensuring the desired number of replicas are always running.
High Availability: Ensures that your application remains available even if one or more pods fail.
Self-Healing: Automatically replaces failed pods to maintain the desired state.
Efficient Workload Management: Helps distribute workloads across nodes in the cluster.
How Does a ReplicaSet Work?
A ReplicaSet relies on selectors to match pods using labels. It uses these selectors to monitor the pods and ensure the actual number of pods matches the specified replica count. If the number is less than the desired count, new pods are created. If it’s greater, excess pods are terminated.
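A quick way to watch this reconciliation happen (a sketch; it assumes the nginx-replicaset example created in the next section is already running, and the pod name is illustrative):

kubectl delete pod nginx-replicaset-xyz12   # remove one replica
kubectl get pods --watch                    # a replacement pod appears to restore the desired count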
    Creating a ReplicaSet
    To create a ReplicaSet, you define its configuration in a YAML file. Here’s an example:
    Example YAML Configuration
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
In this YAML file:
replicas: Specifies the desired number of pod replicas.
selector: Matches pods with the label app=nginx.
template: Defines the pod’s specifications, including the container image and port.
Deploying a ReplicaSet
    Once you have the YAML file ready, follow these steps to deploy it in your Kubernetes cluster.
    Apply the YAML configuration to create the ReplicaSet:
kubectl apply -f nginx-replicaset.yaml
Verify that the ReplicaSet was created and the pods are running:
kubectl get replicaset
Output:
NAME               DESIRED   CURRENT   READY   AGE
nginx-replicaset   3         3         3       5s
List the pods created by the ReplicaSet:
kubectl get pods
Output:
NAME                     READY   STATUS    RESTARTS   AGE
nginx-replicaset-xyz12   1/1     Running   0          10s
nginx-replicaset-abc34   1/1     Running   0          10s
nginx-replicaset-lmn56   1/1     Running   0          10s
Scaling a ReplicaSet
    You can easily scale the number of replicas in a ReplicaSet. For example, to scale the above ReplicaSet to 5 replicas:
kubectl scale replicaset nginx-replicaset --replicas=5
Verify the updated state:
kubectl get replicaset
Output:
NAME               DESIRED   CURRENT   READY   AGE
nginx-replicaset   5         5         5       2m
Learn Kubernetes Operator: learn to build, test and deploy a Kubernetes Operator using Kubebuilder as well as Operator SDK in this course. (Linux Handbook)
Conclusion
    A ReplicaSet is an essential component of Kubernetes, ensuring the desired number of pod replicas are running at all times. By leveraging ReplicaSets, you can achieve high availability, scalability, and self-healing for your applications with ease.
    Whether you’re managing a small application or a large-scale deployment, understanding ReplicaSets is crucial for effective workload management.
✍️ Author: Hitesh Jethwa has 15+ years of experience with Linux system administration and DevOps. He likes to explain complicated topics in an easy-to-understand way.
  13. by: Satoshi Nakamoto
    Wed, 29 Jan 2025 16:53:22 +0530

A few years ago, we witnessed a shift to containers, and today they have become an integral part of the IT infrastructure at most companies.
    Traditional monitoring tools often fall short in providing the visibility needed to ensure performance, security, and reliability.
In my experience, monitoring resource allocation is the most important part of running containers, which is why I have rounded up the top container monitoring solutions offering real-time insights into your containerized environments.
    Top Container Monitoring Solutions
Before I jump into details, here's a brief overview of all the tools I'll be discussing in a moment:
Middleware: free up to 100GB, then pay-as-you-go at $0.3/GB, plus custom enterprise plans. Free tier: yes (up to 100GB data, 1k RUM sessions, 20k synthetic checks, 14-day retention). Paid plans add: unlimited data volume, data pipeline and ingestion control, single sign-on, dedicated support.
Datadog: free plan (limited hosts and 1-day metric retention); Pro starts at $15/host/month; Enterprise from $23. Free tier: yes (basic infrastructure monitoring for up to 5 hosts, limited metric retention). Paid plans add: extended retention, advanced anomaly detection, over 750 integrations, multi-cloud support.
Prometheus & Grafana: open source, no licensing costs. Free tier: yes (full-featured metrics collection with Prometheus, custom dashboards with Grafana). Paid plans: self-managed support only; optional managed services through third-party providers.
Dynatrace: 15-day free trial; usage-based at $0.04/hour for infrastructure-only and $0.08/hour for full-stack. Free tier: N/A (trial only). Paid plans add: AI-driven root cause analysis, automatic topology discovery, enterprise support, multi-cloud observability.
Sematext: free plan (Basic) with limited container monitoring; paid plans start at $0.007/container/hour. Free tier: yes (live metrics for a small number of containers, 30-minute retention, limited alert rules). Paid plans add: increased container limits, extended retention, unlimited alert rules, full-stack monitoring.
Sysdig: free tier; Sysdig Monitor starts at $20/host/month; Sysdig Secure is $60/host/month. Free tier: yes (basic container monitoring, limited metrics and retention). Paid plans add: advanced threat detection, vulnerability management, compliance checks, Prometheus support.
SolarWinds: no permanent free plan; pricing varies by module (starts around $27.50/month or $2995 single license). Free tier: N/A (trial only). Paid plans add: pre-built Docker templates, application-centric mapping, hardware health, synthetic monitoring.
Splunk: Observability Cloud starts at $15/host/month (annual billing); free trial available. Free tier: N/A (trial only). Paid plans add: real-time log and metrics analysis, AI-based anomaly detection, multi-cloud integrations, advanced alerting.
MetricFire: paid plans start at $19/month; free trial offered. Free tier: N/A (trial only). Paid plans add: integration with Graphite and Prometheus, customizable dashboards, real-time alerts.
SigNoz: open source (self-hosted) or custom paid support. Free tier: yes (full observability stack of metrics, traces, and logs with no licensing costs). Paid plans add: commercial support, managed hosting services, extended retention options.
Here, "N/A (trial only)" means that the tool does not offer a permanent free tier but provides a limited-time free trial for users to test its features. After the trial period ends, users must subscribe to a paid plan to continue using the tool. Essentially, there is no free version available for long-term use—only a temporary trial.
    1. Middleware
    Middleware is an excellent choice for teams looking for a free or scalable container monitoring solution. It provides pre-configured dashboards for Kubernetes environments and real-time visibility into container health.
    With a free tier supporting up to 100GB of data and a pay-as-you-go model at $0.3/GB thereafter, it’s ideal for startups or small teams.
    Key features:
Pre-configured dashboards for Kubernetes
Real-time metrics tracking
Alerts for critical events
Correlation of metrics with logs and traces
Pros:
Free tier available
Easy setup with minimal configuration
Scalable pricing model
Cons:
Limited advanced features compared to premium tools
Try Middleware
2. Datadog
    Datadog is a premium solution offering observability across infrastructure, applications, and logs. Its auto-discovery feature makes it particularly suited for dynamic containerized environments.
    The free plan supports up to five hosts with limited retention. Paid plans start at $15 per host per month.
    Key features:
Real-time performance tracking
Anomaly detection using ML
Auto-discovery of new containers
Distributed tracing and APM
Pros:
Extensive integrations (750+)
User-friendly interface
Advanced visualization tools
Cons:
High cost for small teams
Pricing can vary based on usage spikes
Try Datadog
3. Prometheus & Grafana
    This open-source duo provides powerful monitoring and visualization capabilities. Prometheus has an edge in metrics collection with its PromQL query language, while Grafana offers stunning visualizations.
Together, they're a great fit for teams seeking customization without licensing costs.
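As a rough sketch of how the pairing is wired up (the job name and target are illustrative and assume cAdvisor is exposing container metrics on port 8080), a minimal prometheus.yml scrape block looks like the one below; the collected series can then be graphed in Grafana with a PromQL query such as sum by (name) (rate(container_cpu_usage_seconds_total[5m])).

scrape_configs:
  - job_name: "cadvisor"
    scrape_interval: 15s
    static_configs:
      - targets: ["cadvisor:8080"]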
    Key features:
Time-series data collection
Flexible query language (PromQL)
Customizable dashboards
Integrated alerting system
Pros:
Free to use
Highly customizable
Strong community support
Cons:
Requires significant setup effort
Limited out-of-the-box functionality
Try Prometheus & Grafana
4. Dynatrace
    Dynatrace is an AI-powered observability platform designed for large-scale hybrid environments. It automates topology discovery and offers you deep insights into containerized workloads. Pricing starts at $0.04/hour for infrastructure-only monitoring.
    Key features:
AI-powered root cause analysis
Automatic topology mapping
Real-user monitoring
Cloud-native support (Kubernetes/OpenShift)
Pros:
Automated configuration
Scalability for large environments
End-to-end visibility
Cons:
Expensive for smaller teams
Proprietary platform limits flexibility
Try Dynatrace
5. Sematext
    Sematext is a lightweight tool that allows users to monitor metrics and logs across Docker and Kubernetes platforms. Its free plan supports basic container monitoring with limited retention and alerting rules. Paid plans start at just $0.007/container/hour.
    Key features:
Unified dashboard for logs and metrics
Real-time insights into containers and hosts
Auto-discovery of new containers
Anomaly detection and alerting
Pros:
Affordable pricing plans
Simple setup process
Full-stack observability features
Cons:
Limited advanced features compared to premium tools
Try Sematext
7. SolarWinds
    SolarWinds offers an intuitive solution for SMBs needing straightforward container monitoring. While it doesn’t offer a permanent free plan, its pricing starts at around $27.50/month or $2995 as a one-time license fee.
    Key features:
Pre-built Docker templates
Application-centric performance tracking
Hardware health monitoring
Dependency mapping
Pros:
Easy deployment and setup
Out-of-the-box templates
Suitable for smaller teams
Cons:
Limited flexibility compared to open-source tools
Try SolarWinds
8. Splunk
Splunk not only provides log analysis but also strong container monitoring capabilities through its Observability Cloud suite. Pricing starts at $15/host/month on annual billing.
    Key features:
Real-time log and metrics analysis
AI-based anomaly detection
Customizable dashboards and alerts
Integration with OpenTelemetry standards
Pros:
Powerful search capabilities
Scalable architecture
Extensive integrations
Cons:
High licensing costs for large-scale deployments
Try Splunk
9. MetricFire
MetricFire simplifies container monitoring by offering customizable dashboards and seamless integration with Kubernetes and Docker. It is ideal for teams looking for a reliable hosted solution without the hassle of managing infrastructure. Pricing starts at $19/month.
    Key features:
Hosted Graphite and Grafana dashboards
Real-time performance metrics
Integration with Kubernetes and Docker
Customizable alerting systems
Pros:
Easy setup and configuration
Scales effortlessly as metrics grow
Transparent pricing model
Strong community support
Cons:
Limited advanced features compared to proprietary tools
Requires technical expertise for full customization
Try MetricFire
10. SigNoz
    SigNoz is an open-source alternative to proprietary APM tools like Datadog and New Relic. It offers a platform for logs, metrics, and traces under one interface.
    With native OpenTelemetry support and a focus on distributed tracing for microservices architectures, SigNoz is perfect for organizations seeking cost-effective yet powerful observability solutions.
    Key features:
Distributed tracing for microservices
Real-time metrics collection
Centralized log management
Customizable dashboards
Native OpenTelemetry support
Pros:
Completely free if self-hosted
Active development community
Cost-effective managed cloud option
Comprehensive observability stack
Cons:
Requires infrastructure setup if self-hosted
Limited enterprise-level support compared to proprietary tools
Try SigNoz
Evaluate your infrastructure complexity and budget to select the best tool that aligns with your goals!
  14. By: Janus Atienza
    Tue, 28 Jan 2025 23:16:45 +0000


    As a digital marketing agency, your focus is to provide high-quality services to your clients while ensuring that operations run smoothly. However, managing the various components of SEO, such as link-building, can be time-consuming and resource-draining. This is where white-label link-building services come into play. By outsourcing your link-building efforts, you can save time and resources, allowing your agency to focus on more strategic tasks that directly contribute to your clients’ success. Below, we’ll explore how these services can benefit your agency in terms of time and resource management.
    Focus on Core Competencies
    When you choose to outsource your link-building efforts to a white-label service, it allows your agency to focus on your core competencies. As an agency, you may excel in content strategy, social media marketing, or paid advertising. However, link-building requires specialized knowledge, experience, and resources. A white-label link-building service can handle this aspect of SEO for you, freeing up time for your team to focus on what they do best. This way, you can maintain a high level of performance in other areas without spreading your team too thin.
    Eliminate the Need for Specialized Staff
    Building a successful link-building strategy requires expertise, which may not be available within your existing team. Hiring specialized staff to manage outreach campaigns, content creation, and link placements can be expensive and time-consuming. However, white-label link-building services already have the necessary expertise and resources in place. You won’t need to hire or train new employees to handle this aspect of SEO. The service provider’s team can execute campaigns quickly and effectively, allowing your agency to scale without expanding its internal workforce.
    Access to Established Relationships and Networks
    Link-building is not just about placing links on any website; it’s about building relationships with authoritative websites in your client’s industry, especially within relevant open-source projects and Linux communities. This process takes time to establish and requires continuous effort. A white-label link-building service typically has established relationships with high-authority websites, bloggers, and influencers across various industries. By leveraging these networks, they can secure quality backlinks faster and more efficiently than your agency could on its own. This reduces the time spent on outreach and relationship-building, ensuring that your client’s SEO efforts are moving forward without delays. For Linux-focused sites, this can include participation in relevant forums and contributing to open-source projects.
    Efficient Campaign Execution
    White-label link-building services are designed to execute campaigns efficiently. These agencies have streamlined processes and advanced tools that allow them to scale campaigns while maintaining quality. They can manage multiple campaigns at once, ensuring that your clients’ link-building needs are met in a timely manner. By outsourcing to a provider with a proven workflow, you can avoid the inefficiencies associated with trying to build an in-house link-building team. This leads to faster execution, better results, and more satisfied clients.
    Cost-Effectiveness
    Managing link-building in-house can be costly. Aside from the salaries and benefits of hiring staff, you’ll also need to invest in tools, software, and outreach efforts. White-label link-building services, on the other hand, offer more cost-effective solutions. These providers typically offer packages that include all necessary tools, such as backlink analysis software, outreach platforms, and reporting tools, which can be expensive to purchase and maintain on your own. By outsourcing, you save money on infrastructure and overhead costs, all while getting access to the best tools available.
    Reduce Time Spent on Reporting and Analysis
    Effective link-building campaigns require consistent monitoring, analysis, and reporting. Generating reports, tracking backlink quality, and assessing the impact of links on search rankings can be time-consuming tasks. When you outsource this responsibility to a white-label link-building service, they will handle reporting on your behalf. The provider will deliver customized reports that highlight key metrics like the number of backlinks acquired, domain authority, traffic increases, and overall SEO performance. This allows you to deliver the necessary information to your clients while saving time on report generation and analysis. For Linux-based servers, this can also involve analyzing server logs for SEO-related issues.
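For example, on a Linux web server using the common combined access log format, a quick way to surface crawl problems is to count 404 responses by URL (the log path is illustrative):

awk '$9 == 404 {print $7}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head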
    Scalability and Flexibility
    As your agency grows, so does the demand for SEO services. One of the challenges agencies face is scaling their link-building efforts to accommodate more clients or larger campaigns. A white-label link-building service offers scalability and flexibility, meaning that as your client base grows, the provider can handle an increased volume of link-building efforts without compromising on quality. Whether you’re managing a single campaign or hundreds of clients, a reliable white-label service can adjust to your needs and ensure that every client receives the attention their SEO efforts deserve.
    Mitigate Risks Associated with Link-Building
    Link-building, if not done properly, can result in penalties from search engines, harming your client’s SEO performance. Managing link-building campaigns in-house without proper knowledge of SEO best practices can lead to mistakes, such as acquiring low-quality or irrelevant backlinks. White-label link-building services are experts in following search engine guidelines and using ethical link-building practices. By outsourcing, you reduce the risk of penalties, ensuring that your clients’ SEO efforts are safe and aligned with best practices.
    Stay Up-to-Date with SEO Trends
    SEO is an ever-evolving field, and staying up-to-date with the latest trends and algorithm updates can be a full-time job. White-label link-building services are dedicated to staying current with industry changes. By outsourcing your link-building efforts, you can be sure that the provider is implementing the latest techniques and best practices in their campaigns. This ensures that your client’s link-building strategies are always aligned with search engine updates, maximizing their chances of success. This includes familiarity with SEO tools that run on Linux, such as command-line tools and open-source crawlers, and understanding the nuances of optimizing websites hosted on Linux servers.
    Conclusion
    White-label link-building services offer significant time and resource savings for digital marketing agencies. By outsourcing link-building efforts, your agency can focus on core business areas, eliminate the need for specialized in-house staff, and streamline campaign execution. The cost-effectiveness and scalability of these services also make them an attractive option for agencies looking to grow their SEO offerings without overextending their resources. Especially for clients using Linux-based infrastructure, leveraging a white-label service with expertise in this area can be a significant advantage. With a trusted white-label link-building partner, you can deliver high-quality backlinks to your clients, improve their SEO rankings, and drive long-term success.
    The post White-Label Link Building for Linux-Based Websites: Saving Time and Resources appeared first on Unixmen.
  15. Sigma Browser

    by: aiparabellum.com
    Tue, 28 Jan 2025 07:28:06 +0000

    In the digital age, where online privacy and security are paramount, tools like Sigma Browser are gaining significant attention. Sigma Browser is a privacy-focused web browser designed to provide users with a secure, fast, and ad-free browsing experience. Built with advanced features to protect user data and enhance online anonymity, Sigma Browser is an excellent choice for individuals and businesses alike. In this article, we’ll dive into its features, how it works, benefits, pricing, and more to help you understand why Sigma Browser is a standout in the world of secure browsing.
    Features of Sigma Browser AI
    Sigma Browser offers a range of features tailored to ensure privacy, speed, and convenience. Here are some of its key features:
Ad-Free Browsing: Enjoy a seamless browsing experience without intrusive ads.
Enhanced Privacy: Built-in privacy tools to block trackers and protect your data.
Fast Performance: Optimized for speed, ensuring quick page loads and smooth navigation.
Customizable Interface: Personalize your browsing experience with themes and settings.
Cross-Platform Sync: Sync your data across multiple devices for a unified experience.
Secure Browsing: Advanced encryption to keep your online activities private.
How It Works
    Sigma Browser is designed to be user-friendly while prioritizing security. Here’s how it works:
Download and Install: Simply download Sigma Browser from its official website and install it on your device.
Set Up Privacy Settings: Customize your privacy preferences, such as blocking trackers and enabling encryption.
Browse Securely: Start browsing the web with enhanced privacy and no ads.
Sync Across Devices: Log in to your account to sync bookmarks, history, and settings across multiple devices.
Regular Updates: The browser receives frequent updates to improve performance and security.
Benefits of Sigma Browser AI
    Using Sigma Browser comes with numerous advantages:
Improved Privacy: Protects your data from third-party trackers and advertisers.
Faster Browsing: Eliminates ads and optimizes performance for quicker loading times.
User-Friendly: Easy to set up and use, even for non-tech-savvy individuals.
Cross-Device Compatibility: Access your browsing data on any device.
Customization: Tailor the browser to suit your preferences and needs.
Pricing
    Sigma Browser offers flexible pricing plans to cater to different users:
Free Version: Includes basic features like ad-free browsing and privacy protection.
Premium Plan: Unlocks advanced features such as cross-device sync and priority support.
Pricing details are available on the official website.
Sigma Browser Review
    Sigma Browser has received positive feedback from users for its focus on privacy and performance. Many appreciate its ad-free experience and the ability to customize the interface. The cross-platform sync feature is also a standout, making it a convenient choice for users who switch between devices. Some users have noted that the premium plan could offer more features, but overall, Sigma Browser is highly regarded for its security and ease of use.
    Conclusion
    Sigma Browser is a powerful tool for anyone looking to enhance their online privacy and browsing experience. With its ad-free interface, robust privacy features, and fast performance, it stands out as a reliable choice in the crowded browser market. Whether you’re a casual user or a business professional, Sigma Browser offers the tools you need to browse securely and efficiently. Give it a try and experience the difference for yourself.
    Visit Website The post Sigma Browser appeared first on AI Parabellum.
  17. by: Chris Coyier
    Mon, 27 Jan 2025 17:10:10 +0000

    I love a good exposé on how a front-end team operates. Like what technology they use, why, and how, particularly when there are pain points and journeys through them.
Jim Simon of Reddit wrote one a bit ago about their team's build process. They were using something Rollup-based and getting 2-minute build times and spent quite a bit of time and effort switching to Vite and now are getting sub-1-second build times. I don’t know if “wow Vite is fast” is the right read here though, as they lost type checking entirely. Vite means esbuild for TypeScript which just strips types, meaning no build process (locally, in CI, or otherwise) will catch errors. That seems like a massive deal to me as it opens the door to all contributions having TypeScript errors. I admit I’m fascinated by the approach though; it’s kinda like treating TypeScript as a local-only linter. Sure, VS Code complains and gives you red squiggles, but nothing else will, so use that information as you will. Very mixed feelings.
    Vite always seems to be front and center in conversations about the JavaScript ecosystem these days. The tooling section of this year’s JavaScript Rising Stars:
    (Interesting how it’s actually Biome that gained the most stars this year and has large goals about being the toolchain for the web, like Vite)
    Vite actually has the bucks now to make a real run at it. It’s always nail biting and fascinating to see money being thrown around at front-end open source, as a strong business model around all that is hard to find.
    Maybe there is an enterprise story to capture? Somehow I can see that more easily. I would guess that’s where the new venture vlt is seeing potential. npm, now being owned by Microsoft, certainly had a story there that investors probably liked to see, so maybe vlt can do it again but better. It’s the “you’ve got their data” thing that adds up to me. Not that I love it, I just get it. Vite might have your stack, but we write checks to infrastructure companies.
That tinge of worry extends to Bun and Deno too. I think they can survive decently on the momentum of developers being excited about the speed and features. I wouldn’t say I’ve got a full grasp on it, but I’ve seen some developers be pretty disillusioned or at least trepidatious with Deno and their package registry JSR. But Deno has products! They have enterprise consulting and various hosting. Data and product, I think that is all very smart. Maybe void(0) can find a product play in there. This all reminds me of XState / Stately which took a bit of funding, does open source, and productizes some of what they do. Their new Store library is getting lots of attention which is good for the gander.
    To be clear, I’m rooting for all of these companies. They are small and only lightly funded companies, just like CodePen, trying to make tools to make web development better. 💜
  18. by: Andy Clarke
    Mon, 27 Jan 2025 15:35:44 +0000

    Honestly, it’s difficult for me to come to terms with, but almost 20 years have passed since I wrote my first book, Transcending CSS. In it, I explained how and why to use what was the then-emerging Multi-Column Layout module.
    Hint: I published an updated version, Transcending CSS Revisited, which is free to read online.
    Perhaps because, before the web, I’d worked in print, I was over-excited at the prospect of dividing content into columns without needing extra markup purely there for presentation. I’ve used Multi-Column Layout regularly ever since. Yet, CSS Columns remains one of the most underused CSS layout tools. I wonder why that is?
    Holes in the specification
For a long time, there were, and still are, plenty of holes in Multi-Column Layout. Rachel Andrew — now a specification editor — noted as much in her article five years ago.
    She’s right. And that’s still true. You can’t style columns, for example, by alternating background colours using some sort of :nth-column() pseudo-class selector. You can add a column-rule between columns using border-style values like dashed, dotted, and solid, and who can forget those evergreen groove and ridge styles? But you can’t apply border-image values to a column-rule, which seems odd as they were introduced at roughly the same time. The Multi-Column Layout is imperfect, and there’s plenty I wish it could do in the future, but that doesn’t explain why most people ignore what it can do today.
    Patchy browser implementation for a long time
    Legacy browsers simply ignored the column properties they couldn’t process. But, when Multi-Column Layout was first launched, most designers and developers had yet to accept that websites needn’t look the same in every browser.
    Early on, support for Multi-Column Layout was patchy. However, browsers caught up over time, and although there are still discrepancies — especially in controlling content breaks — Multi-Column Layout has now been implemented widely. Yet, for some reason, many designers and developers I speak to feel that CSS Columns remain broken. Yes, there’s plenty that browser makers should do to improve their implementations, but that shouldn’t prevent people from using the solid parts today.
    Readability and usability with scrolling
Maybe the main reason designers and developers haven’t embraced Multi-Column Layout as they have CSS Grid and Flexbox isn’t in the specification or its implementation but in its usability. Rachel pointed this out in her article, too.
That’s true. No one would enjoy repeatedly scrolling up and down to read a long passage of content set in columns. She went on to say that it takes thinking very carefully.
    But, let’s face it, thinking very carefully is what designers and developers should always be doing.
    Sure, if you’re dumb enough to dump a large amount of content into columns without thinking about its design, you’ll end up serving readers a poor experience. But why would you do that when headlines, images, and quotes can span columns and reset the column flow, instantly improving readability? Add to that container queries and newer unit values for text sizing, and there really isn’t a reason to avoid using Multi-Column Layout any longer.
    A brief refresher on properties and values
    Let’s run through a refresher. There are two ways to flow content into multiple columns; first, by defining the number of columns you need using the column-count property:
Second, and often best, is specifying the column width, leaving a browser to decide how many columns will fit along the inline axis. For example, I’m using column-width to specify that my columns are at least 18rem wide. A browser creates as many 18rem columns as possible to fit and then shares any remaining space between them.
Then, there is the gutter (or column-gap) between columns, which you can specify using any length unit. I prefer using rem units to maintain the gutters’ relationship to the text size, but if your gutters need to be 1em, you can leave this out, as that’s a browser’s default gap.
The final column property is the divider (or column-rule) drawn inside the gutters, which adds visual separation between columns. Again, you can set a thickness and use border-style values like dashed, dotted, and solid.
These examples will be seen whenever you encounter a Multi-Column Layout tutorial, including CSS-Tricks’ own Almanac. The Multi-Column Layout syntax is one of the simplest in the suite of CSS layout tools, which is another reason there’s little excuse not to use it.
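Pulling the refresher together, here is a minimal sketch (the selector and exact values are mine; the 18rem column width echoes the example above):

article {
  column-width: 18rem;             /* the browser fits as many 18rem columns as it can */
  column-gap: 1.5rem;              /* gutter between columns, tied to the text size */
  column-rule: 1px dotted #8e8e8e; /* divider drawn inside the gutters */
}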
    Multi-Column Layout is even more relevant today
    When I wrote Transcending CSS and first explained the emerging Multi-Column Layout, there were no rem or viewport units, no :has() or other advanced selectors, no container queries, and no routine use of media queries because responsive design hadn’t been invented.
    We didn’t have calc() or clamp() for adjusting text sizes, and there was no CSS Grid or Flexible Box Layout for precise control over a layout. Now we do, and all these properties help to make Multi-Column Layout even more relevant today.
    Now, you can use rem or viewport units combined with calc() and clamp() to adapt the text size inside CSS Columns. You can use :has() to specify when columns are created, depending on the type of content they contain. Or you might use container queries to implement several columns only when a container is large enough to display them. Of course, you can also combine a Multi-Column Layout with CSS Grid or Flexible Box Layout for even more imaginative layout designs.
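For instance, here is a hedged sketch of that last idea (the selectors and the 60ch threshold are mine): only flow the text into two columns once its container is wide enough.

section {
  container-type: inline-size;     /* make the section a query container */
}

@container (inline-size > 60ch) {
  section main {
    column-count: 2;               /* columns appear only in wide containers */
    column-gap: 2rem;
  }
}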
    Using Multi-Column Layout today
    Patty Meltt is an up-and-coming country music sensation. She’s not real, but the challenges of designing and developing websites like hers are. My challenge was to implement a flexible article layout without media queries which adapts not only to screen size but also whether or not a <figure> is present. To improve the readability of running text in what would potentially be too-long lines, it should be set in columns to narrow the measure. And, as a final touch, the text size should adapt to the width of the container, not the viewport.
Article with no <figure> element: what would potentially be too-long lines of text are set in columns to improve readability by narrowing the measure.
Article containing a <figure> element: no column text is needed for this narrower measure.
The HTML for this layout is rudimentary. One <section>, one <main>, and one <figure> (or not):
<section>
  <main>
    <h1>About Patty</h1>
    <p>…</p>
  </main>
  <figure>
    <img>
  </figure>
</section>
I started by adding Multi-Column Layout styles to the <main> element using the column-width property to set the width of each column to 40ch (characters). The max-width and automatic inline margins reduce the content width and center it in the viewport:
main {
  margin-inline: auto;
  max-width: 100ch;
  column-width: 40ch;
  column-gap: 3rem;
  column-rule: .5px solid #98838F;
}
Next, I applied a flexible box layout to the <section> only if it :has() a direct descendant which is a <figure>:
section:has(> figure) {
  display: flex;
  flex-wrap: wrap;
  gap: 0 3rem;
}
This next min-width: min(100%, 30rem) — applied to both the <main> and <figure> — is a combination of the min-width property and the min() CSS function. The min() function allows you to specify two or more values, and a browser will choose the smallest value from them. This is incredibly useful for responsive layouts where you want to control the size of an element based on different conditions:
section:has(> figure) main {
  flex: 1;
  margin-inline: 0;
  min-width: min(100%, 30rem);
}

section:has(> figure) figure {
  flex: 4;
  min-width: min(100%, 30rem);
}
What’s efficient about this implementation is that Multi-Column Layout styles are applied throughout, with no need for media queries to switch them on or off.
    Adjusting text size in relation to column width helps improve readability. This has only recently become easy to implement with the introduction of container queries, their associated values including cqi, cqw, cqmin, and cqmax. And the clamp() function. Fortunately, you don’t have to work out these text sizes manually as ClearLeft’s Utopia will do the job for you.
    My headlines and paragraph sizes are clamped to their minimum and maximum rem sizes and between them text is fluid depending on their container’s inline size:
h1 {
  font-size: clamp(5.6526rem, 5.4068rem + 1.2288cqi, 6.3592rem);
}

h2 {
  font-size: clamp(1.9994rem, 1.9125rem + 0.4347cqi, 2.2493rem);
}

p {
  font-size: clamp(1rem, 0.9565rem + 0.2174cqi, 1.125rem);
}
So, to specify the <main> as the container on which those text sizes are based, I applied a container query for its inline size:
main {
  container-type: inline-size;
}
Open the final result in a desktop browser, when you’re in front of one. It’s a flexible article layout without media queries which adapts to screen size and the presence of a <figure>. Multi-Column Layout sets text in columns to narrow the measure and the text size adapts to the width of its container, not the viewport.
Modern CSS is solving many prior problems
Structure content with spanning elements which will restart the flow of columns and prevent people from scrolling long distances.
Prevent figures from dividing their images and captions between columns.
Almost every article I’ve ever read about Multi-Column Layout focuses on its flaws, especially usability. CSS-Tricks’ own Geoff Graham even mentioned the scrolling up and down issue when he asked, “When Do You Use CSS Columns?”
    Fortunately, the column-span property — which enables headlines, images, and quotes to span columns, resets the column flow, and instantly improves readability — now has solid support in browsers:
h1, h2, blockquote {
  column-span: all;
}
But the solution to the scrolling up and down issue isn’t purely technical. It also requires content design. This means that content creators and designers must think carefully about the frequency and type of spanning elements, dividing a Multi-Column Layout into shallower sections, reducing the need to scroll and improving someone’s reading experience.
    Another prior problem was preventing headlines from becoming detached from their content and figures, dividing their images and captions between columns. Thankfully, the break-after property now also has widespread support, so orphaned images and captions are now a thing of the past:
figure {
  break-after: column;
}
Open this final example in a desktop browser:
You should take a fresh look at Multi-Column Layout
    Multi-Column Layout isn’t a shiny new tool. In fact, it remains one of the most underused layout tools in CSS. It’s had, and still has, plenty of problems, but they haven’t reduced its usefulness or its ability to add an extra level of refinement to a product or website’s design. Whether you haven’t used Multi-Column Layout in a while or maybe have never tried it, now’s the time to take a fresh look at Multi-Column Layout.
    Revisiting CSS Multi-Column Layout originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  19. How to Update Kali Linux?

    By: Janus Atienza
    Sun, 26 Jan 2025 16:06:55 +0000

Kali Linux is a Debian-based, open-source operating system that’s ideal for penetration testing, reverse engineering, security auditing, and computer forensics. It follows a rolling release model, with multiple OS updates available each year, giving you access to a pool of advanced tools that keep your software secure. But how do you update Kali Linux to the latest version to avoid risks and compatibility issues?
    To help you in this regard, we are going to discuss the step-by-step process of updating Kali Linux and its benefits. Let’s begin! 
    How to Update Kali Linux: Step-by-Step Guide 
Many custom IoT development professionals hired to build smart solutions use Kali Linux for advanced penetration testing and even reverse engineering. However, it is important to keep it updated to avoid vulnerabilities.
Before starting the update process, make sure you have a stable internet connection and administrative (sudo) rights.
    Here are the steps you can follow for this: 
Step 1: Check the Sources List File
The Kali Linux package manager fetches updates from the repository, so you first need to make sure that the system’s repository list is properly configured. Here’s how to check it:
Open the terminal and run the following command to view the sources list file: cat /etc/apt/sources.list
    The output will display this list if your system is using the Kali Linux rolling release repository: deb http://kali.download/kali kali-rolling main contrib non-free non-free-firmware
If the file is empty or has incorrect entries, you can edit it using editors like Nano or Vim. Once you are sure that the list has only official and correct Kali Linux entries, save and close the editor.
Step 2: Update the Package Information
    The next step is to update the package information using the repository list so the Kali Linux system knows about all the latest versions and updates available. The steps for that are:
    In the terminal, run this given command: sudo apt update
    This command updates the system’s package index to the latest repository information. You also see a list of packages being checked and their status (available for upgrade or not). Note: It only extracts the list of new packages and doesn’t install or update them! 
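If you want to see exactly which packages a later upgrade would touch, apt can list them right after the index refresh. A quick, read-only check:
# Show every installed package that has a newer version available
sudo apt list --upgradable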
    Step 3: Do a System Upgrade
    In how to update Kali Linux, the third step involves performing a system upgrade to install the latest versions and updates. 
Run the apt upgrade command to update all installed packages to their latest versions; unlike a full system upgrade, it doesn't remove or install any packages. Use apt full-upgrade when you want to upgrade all packages and allow apt to add or remove packages to keep your system running smoothly. apt dist-upgrade is used when you want to handle package dependency changes, remove obsolete packages, and add new ones. Review the changes each command proposes and confirm the upgrade.
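A minimal sketch of this step, assuming the standard apt tooling that ships with Kali; pick the variant that matches how much change you want to allow:
# Upgrade installed packages without adding or removing anything
sudo apt upgrade -y
# Full upgrade: may install or remove packages to resolve new dependencies
sudo apt full-upgrade -y
# Older equivalent of full-upgrade; handles dependency changes and obsolete packages
sudo apt dist-upgrade -y
Drop the -y flag if you prefer to review the proposed changes and confirm each upgrade manually.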
Step 4: Get Rid of Unnecessary Packages
Over time, useless files can accumulate in your system, taking up valuable disk space. You should get rid of them to declutter the system and free up storage. Here are the steps for that:
    To remove the leftover packages, run the command: sudo apt autoremove -y
Cached files also take up a lot of disk space, and you can remove them via the following command: sudo apt autoclean
    Step 5: Double-Check the Update 
Once you are done installing the latest software, you should double-check that the system is actually running the upgraded release. For this, run the command:
    cat /etc/os-release
    You can then see operating system information like version details and release date. 
    Step 6: It’s Time to Reboot the System 
    Well, this step is optional, but we suggest rebooting Kali Linux to ensure that the system is running the latest version and that all changes are fully applied. You can then perform tasks like security testing of custom IoT development processes. The command for this is: 
    sudo reboot
    Why Update Kali Linux to the Latest Version? 
    Software development and deployment trends are changing quickly. Now that you know how to update and upgrade Kali Linux, you must be wondering why you should update the system and what its impacts are. If so, here are some compelling reasons: 
    Security Fixes and Patches
    Cybercrimes are quickly increasing, and statistics show that 43% of organizations lose existing customers because of cyber attacks. Additionally, individuals lose around $318 billion to cybercrime. 
However, when you update to the latest version of Kali Linux, you get the newest security fixes and patches. They close known vulnerabilities and reduce the chance that professionals fall victim to such malicious attempts.
    Access to New Tools and Features 
    Kali Linux offers many features and tools like Metasploit, Nmap, and others, and they receive consistent updates from their developers. 
So, upgrading the OS ensures that you are using the latest version of all pre-installed tools. You get better functionality and improved system performance, which make your daily tasks more efficient.
    For instance, the updated version of Nmap has fast scanning capabilities that pave the way for quick security auditing and troubleshooting.
    Compatibility with New Technologies 
    Technology is evolving, and new protocols and software are introduced every day. The developers behind Kali Linux are well aware of these shifts. They are pushing regular updates that support these newer technologies for better system compatibility. 
    Conclusion 
The process of updating Kali Linux becomes easy once you know the correct commands and understand the difference between the upgrade options. Most importantly, don't forget to reboot your system after a major update, such as a new kernel, to make sure the changes are applied properly.
    FAQs 
    How often should I update Kali Linux? 
    It’s advisable to update Kali Linux at least once a week or whenever there are new system updates. The purpose is to make sure that the system is secure and has all the latest features by receiving security patches and addressing all vulnerabilities. 
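If you'd rather not rely on memory, a small sketch using cron (assuming you're comfortable editing root's crontab) can at least refresh the package index on a weekly schedule; the upgrades themselves are still applied manually:
# Open root's crontab
sudo crontab -e
# Add a line like this to refresh the package index every Monday at 09:00
0 9 * * 1 apt update -qq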
    Can I update Kali Linux without using the terminal?
For all practical purposes, no: updating Kali Linux is done from the terminal using the apt and apt-get commands. The steps involved include checking the sources file, updating the package index, and upgrading the system.
    Is Kali Linux good for learning cyber security? 
    Yes, Kali Linux is a good tool for learning cyber security. It has a range of tools for penetration testing, network security, analysis, and vulnerability scanning.
    The post How to Update Kali Linux? appeared first on Unixmen.
  20. By: Janus Atienza
    Sun, 26 Jan 2025 00:06:01 +0000

AI-powered tools are changing the software development scene as we speak. AI assistants can not only help with coding, using advanced machine learning algorithms to improve their service, but they can also help with code refactoring, testing, and bug detection. Tools like GitHub Copilot, Tabnine, and DeepCode aim to automate various processes, allowing developers more free time for other, more creative tasks. Of course, implementing AI tools takes time and careful risk assessment because various factors need to be taken into consideration. Let's review some of the most popular automation tools available for Linux.
    Why Use AI-Powered Software Tools in Linux?
    AI is being widely used across various spheres of our lives with businesses utilizing the power of Artificial Intelligence to create new services and products. Even sites like Depositphotos have started offering AI services to create exclusive licensed photos that can be used anywhere – on websites, in advertising, design, and print media. Naturally, software development teams and Linux users have also started implementing AI-powered tools to improve their workflow. Here are some of the benefits of using such tools:
An improved user experience.
Fewer human errors in various processes.
Automation of repetitive tasks boosts overall productivity.
New features become available.
Innovative problem-solving.
Top AI Automation Tools for Linux
    Streamlining processes can greatly increase productivity, allowing developers and Linux users to delegate repetitive tasks to AI-powered software. They offer innovative solutions while optimizing different parts of the development process. Let’s review some of them.
    1. GitHub Copilot
    Just a few years ago no one could’ve imagined that coding could be done by an AI algorithm. This AI-powered software can predict the completion of the code that’s being created, offering different functions and snippets on the go. GitHub Copilot can become an invaluable tool for both expert and novice coders. The algorithms can understand the code that’s being written using OpenAI’s Codex model. It supports various programming languages and can be easily integrated with the majority of IDEs. One of its key benefits is code suggestion based on the context of what’s being created.
    2. DeepCode
    One of the biggest issues all developers face when writing code is potential bugs. This is where an AI-powered code review tool can come in handy. While it won’t help you create the code, it will look for vulnerabilities inside your project, giving context-based feedback and a variety of suggestions to fix the bugs found by the program. Thus, it can help developers improve the quality of their work. DeepCode uses machine learning to become a better help over time, offering improved suggestions as it learns more about the type of work done by the developer. This tool can easily integrate with GitLab, GitHub, and Bitbucket.
    3. Tabnine
    Do you want an AI-powered tool that can actually learn from your coding style and offer suggestions based on it? Tabnine can do exactly that, predicting functions and offering snippets of code based on what you’re writing. It can be customized for a variety of needs and operations while supporting 30 programming languages. You can use this tool offline for improved security.
4. CircleCI
This is a powerful continuous integration and continuous delivery (CI/CD) platform that helps automate software development operations. It helps engineering teams build code easily, offering automatic tests at each stage of the process whenever a change is introduced. You can develop your app quickly and easily with CircleCI's automated testing, which covers mobile, serverless, API, web, and AI frameworks. This platform can help you significantly reduce testing time and build simple, stable systems.
    5. Selenium
    This is one of the most popular testing tools used by developers all over the world. It’s compatible across various platforms, including Linux, due to the open-source nature of this framework. It offers a seamless process of generating and managing test cases, as well as compiling project reports. It can collaborate with continuous automated testing tools for better results.
    6. Code Intelligence
This is yet another tool capable of analyzing the source code to detect bugs and vulnerabilities without human supervision. It can find inconsistencies that are often missed by other testing methods, allowing development teams to resolve issues before the software is released. This tool works autonomously and simplifies root cause analysis. It utilizes self-learning AI capabilities to boost the testing process and swiftly pinpoints the line of code that contains the bug.
    7. ONLYOFFICE Docs
    This open-source office suite allows real-time collaboration and offers a few interesting options when it comes to AI. You can install a plugin and get access to ChatGPT for free and use its features while creating a document. Some of the most handy ones include translation, spellcheck, grammar correction, word analysis, and text generation. You can also generate images for your documents and have a chat with ChatGPT while working on your project.
    Conclusion
    When it comes to the Linux operating system, there are numerous AI-powered automation tools you can try. A lot of them are used in software development to improve the code-writing process and allow developers to have more free time for other tasks. AI tools can utilize machine learning to provide you with better services while offering a variety of ways to streamline your workflow. Tools such as DeepCode, Tabnine, GitHub Copilot, and Selenium can look for solutions whenever you’re facing issues with your software. These programs will also offer snippets of code on the go while checking your project for bugs.
    The post How AI is Revolutionizing Linux System Administration: Tools and Techniques for Automation appeared first on Unixmen.
  21. By: Janus Atienza
    Sat, 25 Jan 2025 23:26:38 +0000

    In today’s digital age, safeguarding your communication is paramount. Email encryption serves as a crucial tool to protect sensitive data from unauthorized access. Linux users, known for their preference for open-source solutions, must embrace encryption to ensure privacy and security.
    With increasing cyber threats, the need for secure email communications has never been more critical. Email encryption acts as a protective shield, ensuring that only intended recipients can read the content of your emails. For Linux users, employing encryption techniques not only enhances personal data protection but also aligns with the ethos of secure and open-source computing. This guide will walk you through the essentials of setting up email encryption on Linux and how you can integrate advanced solutions to bolster your security.
    Setting up email encryption on Linux
    Implementing email encryption on Linux can be straightforward with the right tools. Popular email clients like Thunderbird and Evolution support OpenPGP and S/MIME protocols for encrypting emails. Begin by installing GnuPG, an open-source software that provides cryptographic privacy and authentication.
    Once installed, generate a pair of keys—a public key to share with those you communicate with and a private key that remains confidential to you. Configure your chosen email client to use these keys for encrypting and decrypting emails. The interface typically offers user-friendly options to enable encryption settings directly within the email composition window.
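As a rough sketch of that key-generation step with GnuPG (the email address and file names below are placeholders), the interactive generator walks you through the key type, size, and expiry, and the export command produces the public key you share:
# Generate a new key pair interactively
gpg --full-generate-key
# Export your public key to share with your contacts
gpg --armor --export you@example.com > my-public-key.asc
# Import a contact's public key so you can encrypt mail to them
gpg --import their-public-key.asc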
    To further assist in this setup, many online tutorials offer detailed guides complete with screenshots to ease the process for beginners. Additionally, staying updated with the latest software versions is recommended to ensure optimal security features are in place.
    How email encryption works
    Email encryption is a process that transforms readable text into a scrambled format that can only be decoded by the intended recipient. It is essential for maintaining privacy and security in digital communications. As technology advances, so do the methods used by cybercriminals to intercept sensitive information. Thus, understanding the principles of email encryption becomes crucial.
    The basic principle of encryption involves using keys—a public key for encrypting emails and a private key for decrypting them. This ensures that even if emails are intercepted during transmission, they remain unreadable without the correct decryption key. Whether you’re using email services like Gmail or Outlook, integrating encryption can significantly reduce the risk of data breaches.
    Many email providers offer built-in encryption features, but for Linux users seeking more control, there are numerous open-source tools available. Email encryption from Trustifi provides an additional layer of security by incorporating advanced AI-powered solutions into your existing setup.
    Integrating advanced encryption solutions
    For those seeking enhanced security measures beyond standard practices, integrating solutions like Trustifi into your Linux-based email clients can be highly beneficial. Trustifi offers services such as inbound threat protection and outbound email encryption powered by AI technology.
    The integration process involves installing Trustifi’s plugin or API into your existing email infrastructure. This enables comprehensive protection against potential threats while ensuring that encrypted communications are seamless and efficient. With Trustifi’s advanced algorithms, businesses can rest assured that their communications are safeguarded against both current and emerging cyber threats.
    This approach not only protects sensitive data but also simplifies compliance with regulatory standards regarding data protection and privacy. Businesses leveraging such tools position themselves better in preventing data breaches and maintaining customer trust.
    Best practices for secure email communication
    Beyond technical setups, maintaining secure email practices is equally important. Start by using strong passwords that combine letters, numbers, and symbols; avoid easily guessed phrases or patterns. Enabling two-factor authentication adds another layer of security by requiring additional verification steps before accessing accounts.
    Regularly updating software helps protect against vulnerabilities that hackers might exploit. Many systems offer automatic updates; however, manually checking for updates can ensure no critical patches are missed. Staying informed about the latest security threats allows users to adapt their strategies accordingly.
    Ultimately, being proactive about security measures cultivates a safer digital environment for both personal and professional communications. Adopting these practices alongside robust encryption technologies ensures comprehensive protection against unauthorized access.
    The post Mastering email encryption on Linux appeared first on Unixmen.
  22. by: Preethi
    Fri, 24 Jan 2025 14:59:25 +0000

    When it comes to positioning elements on a page, including text, there are many ways to go about it in CSS — the literal position property with corresponding inset-* properties, translate, margin, anchor() (limited browser support at the moment), and so forth. The offset property is another one that belongs in that list.
    The offset property is typically used for animating an element along a predetermined path. For instance, the square in the following example traverses a circular path:
    <div class="circle"> <div class="square"></div> </div> @property --p { syntax: '<percentage>'; inherits: false; initial-value: 0%; } .square { offset: top 50% right 50% circle(50%) var(--p); transition: --p 1s linear; /* Equivalent to: offset-position: top 50% right 50%; offset-path: circle(50%); offset-distance: var(--p); */ /* etc. */ } .circle:hover .square{ --p: 100%; } CodePen Embed Fallback A registered CSS custom property (--p) is used to set and animate the offset distance of the square element. The animation is possible because an element can be positioned at any point in a given path using offset. and maybe you didn’t know this, but offset is a shorthand property comprised of the following constituent properties:
offset-position: The path’s starting point
offset-path: The shape along which the element can be moved
offset-distance: A distance along the path on which the element is moved
offset-rotate: The rotation angle of an element relative to its anchor point and offset path
offset-anchor: A position within the element that’s aligned to the path
The offset-path property is the one that’s important to what we’re trying to achieve. It accepts a shape value — including SVG shapes or CSS shape functions — as well as reference boxes of the containing element to create the path.
    Reference boxes? Those are an element’s dimensions according to the CSS Box Model, including content-box, padding-box, border-box, as well as SVG contexts, such as the view-box, fill-box, and stroke-box. These simplify how we position elements along the edges of their containing elements. Here’s an example: all the small squares below are placed in the default top-left corner of their containing elements’ content-box. In contrast, the small circles are positioned along the top-right corner (25% into their containing elements’ square perimeter) of the content-box, border-box, and padding-box, respectively.
    <div class="big"> <div class="small circle"></div> <div class="small square"></div> <p>She sells sea shells by the seashore</p> </div> <div class="big"> <div class="small circle"></div> <div class="small square"></div> <p>She sells sea shells by the seashore</p> </div> <div class="big"> <div class="small circle"></div> <div class="small square"></div> <p>She sells sea shells by the seashore</p> </div> .small { /* etc. */ position: absolute; &.square { offset: content-box; border-radius: 4px; } &.circle { border-radius: 50%; } } .big { /* etc. */ contain: layout; /* (or position: relative) */ &:nth-of-type(1) { .circle { offset: content-box 25%; } } &:nth-of-type(2) { border: 20px solid rgb(170 232 251); .circle { offset: border-box 25%; } } &:nth-of-type(3) { padding: 20px; .circle { offset: padding-box 25%; } } } CodePen Embed Fallback Note: You can separate the element’s offset-positioned layout context if you don’t want to allocated space for it inside its containing parent element. That’s how I’ve approached it in the example above so that the paragraph text inside can sit flush against the edges. As a result, the offset positioned elements (small squares and circles) are given their own contexts using position: absolute, which removes them from the normal document flow.
    This method, positioning relative to reference boxes, makes it easy to place elements like notification dots and ornamental ribbon tips along the periphery of some UI module. It further simplifies the placement of texts along a containing block’s edges, as offset can also rotate elements along the path, thanks to offset-rotate. A simple example shows the date of an article placed at a block’s right edge:
    <article> <h1>The Irreplaceable Value of Human Decision-Making in the Age of AI</h1> <!-- paragraphs --> <div class="date">Published on 11<sup>th</sup> Dec</div> <cite>An excerpt from the HBR article</cite> </article> article { container-type: inline-size; /* etc. */ } .date { offset: padding-box 100cqw 90deg / left 0 bottom -10px; /* Equivalent to: offset-path: padding-box; offset-distance: 100cqw; (100% of the container element's width) offset-rotate: 90deg; offset-anchor: left 0 bottom -10px; */ } CodePen Embed Fallback As we just saw, using the offset property with a reference box path and container units is even more efficient — you can easily set the offset distance based on the containing element’s width or height. I’ll include a reference for learning more about container queries and container query units in the “Further Reading” section at the end of this article.
    There’s also the offset-anchor property that’s used in that last example. It provides the anchor for the element’s displacement and rotation — for instance, the 90 degree rotation in the example happens from the element’s bottom-left corner. The offset-anchor property can also be used to move the element either inward or outward from the reference box by adjusting inset-* values — for instance, the bottom -10px arguments pull the element’s bottom edge outwards from its containing element’s padding-box. This enhances the precision of placements, also demonstrated below.
    <figure> <div class="big">4</div> <div class="small">number four</div> </figure> .small { width: max-content; offset: content-box 90% -54deg / center -3rem; /* Equivalent to: offset-path: content-box; offset-distance: 90%; offset-rotate: -54deg; offset-anchor: center -3rem; */ font-size: 1.5rem; color: navy; } CodePen Embed Fallback As shown at the beginning of the article, offset positioning is animateable, which allows for dynamic design effects, like this:
    <article> <figure> <div class="small one">17<sup>th</sup> Jan. 2025</div> <span class="big">Seminar<br>on<br>Literature</span> <div class="small two">Tickets Available</div> </figure> </article> @property --d { syntax: "<percentage>"; inherits: false; initial-value: 0%; } .small { /* other style rules */ offset: content-box var(--d) 0deg / left center; /* Equivalent to: offset-path: content-box; offset-distance: var(--d); offset-rotate: 0deg; offset-anchor: left center; */ transition: --d .2s linear; &.one { --d: 2%; } &.two { --d: 70%; } } article:hover figure { .one { --d: 15%; } .two { --d: 80%; } } CodePen Embed Fallback Wrapping up
    Whether for graphic designs like text along borders, textual annotations, or even dynamic texts like error messaging, CSS offset is an easy-to-use option to achieve all of that. We can position the elements along the reference boxes of their containing parent elements, rotate them, and even add animation if needed.
    Further reading
    The CSS offset-path property: CSS-Tricks, MDN The CSS offset-anchor property: CSS-Tricks, MDN Container query length units: CSS-Tricks, MDN The @property at-rule: CSS-Tricks, web.dev The CSS Box Model: CSS-Tricks SVG Reference Boxes: W3C Positioning Text Around Elements With CSS Offset originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  23. by: Geoff Graham
    Thu, 23 Jan 2025 17:21:15 +0000

    I was reading through Juan’s recent Almanac entry for the @counter-style at-rule and I’ll be darned if he didn’t uncover and unpack some extremely interesting things that we can do to style lists, notably the list marker. You’re probably already aware of the ::marker pseudo-element. You’ve more than likely dabbled with custom counters using counter-reset and counter-increment. Or maybe your way of doing things is to wipe out the list-style (careful when doing that!) and hand-roll a marker on the list item’s ::before pseudo.
    But have you toyed around with @counter-style? Turns out it does a lot of heavy lifting and opens up new ways of working with lists and list markers.
    You can style the marker of just one list item
    This is called a “fixed” system set to a specific item.
    @counter-style style-fourth-item { system: fixed 4; symbols: "💠"; suffix: " "; } li { list-style: style-fourth-item; } CodePen Embed Fallback You can assign characters to specific markers
    If you go with an “additive” system, then you can define which symbols belong to which list items.
    @counter-style dice { system: additive; additive-symbols: 6 "⚅", 5 "⚄", 4 "⚃", 3 "⚂", 2 "⚁", 1 "⚀"; suffix: " "; } li { list-style: dice; } CodePen Embed Fallback Notice how the system repeats once it reaches the end of the cycle and begins a new series based on the first item in the pattern. So, for example, there are six sides to typical dice and we start rolling two dice on the seventh list item, totaling seven.
    You can add a prefix and suffix to list markers
    A long while back, Chris showed off a way to insert punctuation at the end of a list marker using the list item’s ::before pseudo:
    ol { list-style: none; counter-reset: my-awesome-counter; li { counter-increment: my-awesome-counter; &::before { content: counter(my-awesome-counter) ") "; } } } That’s much easier these days with @counter-styles:
    @counter-style parentheses { system: extends decimal; prefix: "("; suffix: ") "; } CodePen Embed Fallback You can style multiple ranges of list items
    Let’s say you have a list of 10 items but you only want to style items 1-3. We can set a range for that:
    @counter-style single-range { system: extends upper-roman; suffix: "."; range: 1 3; } li { list-style: single-range; } CodePen Embed Fallback We can even extend our own dice example from earlier:
    @counter-style dice { system: additive; additive-symbols: 6 "⚅", 5 "⚄", 4 "⚃", 3 "⚂", 2 "⚁", 1 "⚀"; suffix: " "; } @counter-style single-range { system: extends dice; suffix: "."; range: 1 3; } li { list-style: single-range; } CodePen Embed Fallback Another way to do that is to use the infinite keyword as the first value:
    @counter-style dice { system: additive; additive-symbols: 6 "⚅", 5 "⚄", 4 "⚃", 3 "⚂", 2 "⚁", 1 "⚀"; suffix: " "; } @counter-style single-range { system: extends dice; suffix: "."; range: infinite 3; } li { list-style: single-range; } Speaking of infinite, you can set it as the second value and it will count up infinitely for as many list items as you have.
    Maybe you want to style two ranges at a time and include items 6-9. I’m not sure why the heck you’d want to do that but I’m sure you (or your HIPPO) have got good reasons.
    @counter-style dice { system: additive; additive-symbols: 6 "⚅", 5 "⚄", 4 "⚃", 3 "⚂", 2 "⚁", 1 "⚀"; suffix: " "; } @counter-style multiple-ranges { system: extends dice; suffix: "."; range: 1 3, 6 9; } li { list-style: multiple-ranges; } CodePen Embed Fallback You can add padding around the list markers
    You ever run into a situation where your list markers are unevenly aligned? That usually happens when going from, say, a single digit to a double-digit. You can pad the marker with extra characters to line things up.
    /* adds leading zeroes to list item markers */ @counter-style zero-padded-example { system: extends decimal; pad: 3 "0"; } Now the markers will always be aligned… well, up to 999 items.
    CodePen Embed Fallback That’s it!
    I just thought those were some pretty interesting ways to work with list markers in CSS that run counter (get it?!) to how I’ve traditionally approached this sort of thing. And with @counter-style becoming Baseline “newly available” in September 2023, it’s well-supported in browsers.
    Some Things You Might Not Know About Custom Counter Styles originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  24. by: Abhishek Kumar
    Thu, 23 Jan 2025 11:22:15 +0530

    Imagine this: You’ve deployed a handful of Docker containers to power your favorite applications, maybe a self-hosted Nextcloud for your files, a Pi-hole for ad-blocking, or even a media server like Jellyfin.
    Everything is running like a charm, but then you hit a common snag: keeping those containers updated.
    When a new image is released, you’ll need to manually pull it, stop the running container, recreate it with the updated image, and hope everything works as expected.
    Multiply that by the number of containers you’re running, and it’s clear how this quickly becomes a tedious and time-consuming chore.
    But there’s more at stake than just convenience. Skipping updates or delaying them for too long can lead to outdated software running in your containers, which often means unpatched vulnerabilities.
    These can become a serious security risk, especially if you’re hosting services exposed to the internet.
    This is where Watchtower steps in, a tool designed to take the hassle out of container updates by automating the entire process.
    Whether you’re running a homelab or managing a production environment, Watchtower ensures your containers are always up-to-date and secure, all with minimal effort on your part.
    What is Watchtower?
    Watchtower is an open-source tool that automatically monitors your Docker containers and updates them whenever a new version of their image is available.
    It keeps your setup up-to-date, saving time and reducing the risk of running outdated containers.
But it’s not just a "set it and forget it" solution; it’s also highly customizable, allowing you to tailor its behavior to fit your workflow.
    Whether you prefer full automation or staying in control of updates, Watchtower has you covered.
    How does it work?
    Watchtower works by periodically checking for updates to the images of your running containers. When it detects a newer version, it pulls the updated image, stops the current container, and starts a new one using the updated image.
    The best part? It maintains your existing container configuration, including port bindings, volume mounts, and environment variables.
    If your containers depend on each other, Watchtower handles the update process in the correct order to avoid downtime.
    Deploying watchtower
    Since you’re reading this article, I’ll assume you already have some sort of homelab or Docker setup where you want to automate container updates. That means I won’t be covering Docker installation here.
    When it comes to deploying Watchtower, it can be done in two ways:
    Docker run
    If you’re just trying it out or want a straightforward deployment, you can run the following command:
docker run -d --name watchtower -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower
This will spin up a Watchtower container that monitors your running containers and updates them automatically.
But here’s the thing: I’m not a fan of the docker run command.
It’s quick, sure, but I prefer the stack approach rather than cramming everything into a single command.
    Docker compose
If you fancy using Docker Compose to run Watchtower, here’s a minimal configuration that replicates the docker run command above:
    version: "3.8" services: watchtower: image: containrrr/watchtower container_name: watchtower restart: always volumes: - /var/run/docker.sock:/var/run/docker.sock To start Watchtower using this configuration, save it as docker-compose.yml and run:
docker-compose up -d
This will give you the same functionality as the docker run command, but in a cleaner, more manageable format.
    Customizing watchtower with environment variables
    Running Watchtower plainly is all good, but we can make it even better with environment variables and command arguments.
    Personally, I don’t like giving full autonomy to one service to automatically make changes on my behalf.
    Since I have a pretty decent homelab running crucial containers, I prefer using Watchtower to notify me about updates rather than updating everything automatically.
    This ensures that I remain in control, especially for containers that are finicky or require a perfect pairing with their databases.
Sneak peek into my homelab
    Take a look at my homelab setup: it’s mostly CMS containers for myself and for clients, and some of them can behave unpredictably if not updated carefully.
    So instead of letting Watchtower update everything, I configure it to provide insights and alerts, and then I manually decide which updates to apply.
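Watchtower has a monitor-only mode for exactly this workflow. The sketch below assumes the WATCHTOWER_MONITOR_ONLY variable documented for the containrrr/watchtower image, so double-check the docs for the version you run:
# Check for new images and send notifications, but never pull, stop, or recreate containers
docker run -d --name watchtower \
  -e WATCHTOWER_MONITOR_ONLY=true \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower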
    To achieve this, we’ll add the following environment variables to our Docker Compose file:
Environment Variable: Description
WATCHTOWER_CLEANUP: Removes old images after updates, keeping your Docker host clean.
WATCHTOWER_POLL_INTERVAL: Sets how often Watchtower checks for updates (in seconds). One hour (3600 seconds) is a good balance.
WATCHTOWER_LABEL_ENABLE: Updates only containers with specific labels, giving you granular control.
WATCHTOWER_DEBUG: Enables detailed logs, which can be invaluable for troubleshooting.
WATCHTOWER_NOTIFICATIONS: Configures the notification method (e.g., email) to keep you informed about updates.
WATCHTOWER_NOTIFICATION_EMAIL_FROM: The email address from which notifications will be sent.
WATCHTOWER_NOTIFICATION_EMAIL_TO: The recipient email address for update notifications.
WATCHTOWER_NOTIFICATION_EMAIL_SERVER: SMTP server address for sending notifications.
WATCHTOWER_NOTIFICATION_EMAIL_SERVER_PORT: Port used by the SMTP server (commonly 587 for TLS).
WATCHTOWER_NOTIFICATION_EMAIL_USERNAME: SMTP server username for authentication.
WATCHTOWER_NOTIFICATION_EMAIL_PASSWORD: SMTP server password for authentication.
Here’s how the updated docker-compose.yml file would look:
    version: "3.8" services: watchtower: image: containrrr/watchtower container_name: watchtower restart: always environment: WATCHTOWER_CLEANUP: "true" WATCHTOWER_POLL_INTERVAL: "3600" WATCHTOWER_LABEL_ENABLE: "true" WATCHTOWER_DEBUG: "true" WATCHTOWER_NOTIFICATIONS: "email" WATCHTOWER_NOTIFICATION_EMAIL_FROM: "admin@example.com" WATCHTOWER_NOTIFICATION_EMAIL_TO: "notify@example.com" WATCHTOWER_NOTIFICATION_EMAIL_SERVER: "smtp.example.com" WATCHTOWER_NOTIFICATION_EMAIL_SERVER_PORT: "587" WATCHTOWER_NOTIFICATION_EMAIL_USERNAME: "your_email_username" WATCHTOWER_NOTIFICATION_EMAIL_PASSWORD: "your_email_password" volumes: - /var/run/docker.sock:/var/run/docker.sock I like to put my credentials in a separate environment file.Once you run the Watchtower container for the first time, you'll receive an initial email confirming that the service is up and running.
Once you run the Watchtower container for the first time, you'll receive an initial email confirming that the service is up and running. Here's an example of what that email might look like:
    After some time, as Watchtower analyzes your setup and scans the running containers, it will notify you if it detects any updates available for your containers.
    These notifications are sent in real-time and look something like this:
    This feature ensures you're always in the loop about potential updates without having to check manually.
    Final thoughts
    I’m really impressed by Watchtower and have been using it for a month now.
I recommend playing around with it in an isolated environment first, if possible; that’s what I did before deploying it in my homelab.
The email notification feature is great, but my inbox is now filled with Watchtower emails, so I might create a rule to manage them better. Overall, no complaints so far! I find it better than the manual update process we discussed earlier.
Updating Docker Containers With Zero Downtime: a step-by-step methodology that can be very helpful in your day-to-day DevOps activities without sacrificing invaluable uptime. (Linux Handbook, Avimanyu Bandyopadhyay)
What about you? What do you use to update your containers?
    If you’ve tried Watchtower, share your experience, anything I should be mindful of?
    Let us know in the comments!
  25. by: Geoff Graham
    Tue, 21 Jan 2025 14:21:32 +0000

    Chris wrote about “Likes” pages a long while back. The idea is rather simple: “Like” an item in your RSS reader and display it in a feed of other liked items. The little example Chris made is still really good.
    CodePen Embed Fallback There were two things Chris noted at the time. One was that he used a public CORS proxy that he wouldn’t use in a production environment. Good idea to nix that, security and all. The other was that he’d consider using WordPress transients to fetch and cache the data to work around CORS.
    I decided to do that! The result is this WordPress block I can drop right in here. I’ll plop it in a <details> to keep things brief.
Open Starred Feed
Link on 1/16/2025: Don’t Wrap Figure in a Link
    adrianroselli.com In my post Brief Note on Figure and Figcaption Support I demonstrate how, when encountering a figure with a screen reader, you won’t hear everything announced at once:
Link on 1/15/2025: Learning HTML is the best investment I ever did
    christianheilmann.com One of the running jokes and/or discussion I am sick and tired of is people belittling HTML. Yes, HTML is not a programming language. No, HTML should not just be a compilation target. Learning HTML is a solid investment and not hard to do.
    I am not…
Link on 1/14/2025: Open Props UI
    nerdy.dev Presenting Open Props UI!…
Link on 1/12/2025: Gotchas in Naming CSS View Transitions
blog.jim-nielsen.com I’m playing with making cross-document view transitions work on this blog.
    Nothing fancy. Mostly copying how Dave Rupert does it on his site where you get a cross-fade animation on the whole page generally, and a little position animation on the page title specifically.

Link on 1/6/2025: The :empty pseudo-class
html-css-tip-of-the-week.netlify.app We can use the :empty pseudo-class as a way to style elements on your webpage that are empty.
    You might wonder why you’d want to style something that’s empty. Let’s say you’re creating a todo list.
    You want to put your todo items in a list, but what about when you don’t…
Link on 1/8/2025: CSS Wish List 2025
meyerweb.com Back in 2023, I belatedly jumped on the bandwagon of people posting their CSS wish lists for the coming year. This year I’m doing all that again, less belatedly! (I didn’t do it last year because I couldn’t even. Get it?)
    I started this post by looking at what I…
Link on 1/9/2025: aria-description Does Not Translate
    adrianroselli.com It does, actually. In Firefox. Sometimes.
    A major risk of using ARIA to define text content is it typically gets overlooked in translation. Automated translation services often do not capture it. Those who pay for localization services frequently miss content in ARIA attributes when sending text strings to localization vendors.
    Content buried…
Link on 1/8/2025: Let’s Standardize Async CSS!
scottjehl.com 6 years back I posted the Simplest Way to Load CSS Asynchronously to document a hack we’d been using for at least 6 years prior to that. The use case for this hack is to load CSS files asynchronously, something that HTML itself still does not support, even though…
Link on 1/9/2025: Tight Mode: Why Browsers Produce Different Performance Results
smashingmagazine.com This article is sponsored by DebugBear
    I was chatting with DebugBear’s Matt Zeunert and, in the process, he casually mentioned this thing called Tight Mode when describing how browsers fetch and prioritize resources. I wanted to nod along like I knew what he was talking about…
Link on 12/19/2024: Why I’m excited about text-box-trim as a designer
piccalil.li I’ve been excited by the potential of text-box-trim, text-edge and text-box for a while. They’re in draft status at the moment, but when more browser support is available, this capability will open up some exciting possibilities for improving typesetting in the browser, as well as giving us more…
    It’s a little different. For one, I’m only fetching 10 items at a time. We could push that to infinity but that comes with a performance tax, not to mention I have no way of organizing the items for them to be grouped and filtered. Maybe that’ll be a future enhancement!
    The Chris demo provided the bones and it does most of the heavy lifting. The “tough” parts were square-pegging the thing into a WordPress block architecture and then getting transients going. This is my first time working with transients, so I thought I’d share the relevant code and pick it apart.
function fetch_and_store_data() {
  $transient_key = 'fetched_data';
  $cached_data = get_transient($transient_key);

  if ($cached_data) {
    return new WP_REST_Response($cached_data, 200);
  }

  $response = wp_remote_get('https://feedbin.com/starred/a22c4101980b055d688e90512b083e8d.xml');

  if (is_wp_error($response)) {
    return new WP_REST_Response('Error fetching data', 500);
  }

  $body = wp_remote_retrieve_body($response);
  $data = simplexml_load_string($body, 'SimpleXMLElement', LIBXML_NOCDATA);
  $json_data = json_encode($data);
  $array_data = json_decode($json_data, true);

  $items = [];
  foreach ($array_data['channel']['item'] as $item) {
    $items[] = [
      'title' => $item['title'],
      'link' => $item['link'],
      'pubDate' => $item['pubDate'],
      'description' => $item['description'],
    ];
  }

  set_transient($transient_key, $items, 12 * HOUR_IN_SECONDS);

  return new WP_REST_Response($items, 200);
}

add_action('rest_api_init', function () {
  register_rest_route('custom/v1', '/fetch-data', [
    'methods' => 'GET',
    'callback' => 'fetch_and_store_data',
  ]);
});
Could this be refactored and written more efficiently? All signs point to yes. But here’s how I grokked it:
    function fetch_and_store_data() { } The function’s name can be anything. Naming is hard. The first two variables:
    $transient_key = 'fetched_data'; $cached_data = get_transient($transient_key); The $transient_key is simply a name that identifies the transient when we set it and get it. In fact, the $cached_data is the getter so that part’s done. Check!
    I only want the $cached_data if it exists, so there’s a check for that:
    if ($cached_data) { return new WP_REST_Response($cached_data, 200); } This also establishes a new response from the WordPress REST API, which is where the data is cached. Rather than pull the data directly from Feedbin, I’m pulling it and caching it in the REST API. This way, CORS is no longer an issue being that the starred items are now locally stored on my own domain. That’s where the wp_remote_get() function comes in to form that response from Feedbin as the origin:
    $response = wp_remote_get('https://feedbin.com/starred/a22c4101980b055d688e90512b083e8d.xml'); Similarly, I decided to throw an error if there’s no $response. That means there’s no freshly $cached_data and that’s something I want to know right away.
    if (is_wp_error($response)) { return new WP_REST_Response('Error fetching data', 500); } The bulk of the work is merely parsing the XML data I get back from Feedbin to JSON. This scours the XML and loops through each item to get its title, link, publish date, and description:
    $body = wp_remote_retrieve_body($response); $data = simplexml_load_string($body, 'SimpleXMLElement', LIBXML_NOCDATA); $json_data = json_encode($data); $array_data = json_decode($json_data, true); $items = []; foreach ($array_data['channel']['item'] as $item) { $items[] = [ 'title' => $item['title'], 'link' => $item['link'], 'pubDate' => $item['pubDate'], 'description' => $item['description'], ]; } “Description” is a loaded term. It could be the full body of a post or an excerpt — we don’t know until we get it! So, I’m splicing and trimming it in the block’s Edit component to stub it at no more than 50 words. There’s a little risk there because I’m rendering the HTML I get back from the API. Security, yes. But there’s also the chance I render an open tag without its closing counterpart, muffing up my layout. I know there are libraries to address that but I’m keeping things simple for now.
    Now it’s time to set the transient once things have been fetched and parsed:
    set_transient($transient_key, $items, 12 * HOUR_IN_SECONDS); The WordPress docs are great at explaining the set_transient() function. It takes three arguments, the first being the $transient_key that was named earlier to identify which transient is getting set. The other two:
$value: This is the object we’re storing in the named transient. That’s the $items object handling all the parsing.
$expiration: How long should this transient last? It wouldn’t be transient if it lingered around forever, so we set an amount of time expressed in seconds. Mine lingers for 12 hours before it expires and then updates the next time a visitor hits the page.
OK, time to return the items from the REST API as a new response:
    return new WP_REST_Response($items, 200); That’s it! Well, at least for setting and getting the transient. The next thing I realized I needed was a custom REST API endpoint to call the data. I really had to lean on the WordPress docs to get this going:
    add_action('rest_api_init', function () { register_rest_route('custom/v1', '/fetch-data', [ 'methods' => 'GET', 'callback' => 'fetch_and_store_data', ]); }); That’s where I struggled most and felt like this all took wayyyyy too much time. Well, that and sparring with the block itself. I find it super hard to get the front and back end components to sync up and, honestly, a lot of that code looks super redundant if you were to scope it out. That’s another story altogether.
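With that route registered, the cached feed should be reachable through the REST API at the custom/v1/fetch-data endpoint (the domain below is a placeholder, and this assumes the default /wp-json prefix). A quick way to sanity-check the response and the 12-hour caching from the command line:
# First request populates the transient; repeats within 12 hours return the cached copy
curl -s https://example.com/wp-json/custom/v1/fetch-data | head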
    Enjoy reading what we’re reading! I put a page together that pulls in the 10 most recent items with a link to subscribe to the full feed.
    Creating a “Starred” Feed originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
