
-
The Mistakes of CSS
by: Juan Diego Rodríguez Thu, 30 Jan 2025 14:31:08 +0000

Surely you have seen a CSS property and thought "Why?" You are not alone. CSS was born in 1996 (it can legally order a beer, you know!) and was initially considered a way to style documents; I don't think anyone imagined everything CSS would be expected to do nearly 30 years later. If we had a time machine, many things would be done differently to match conventions or to make more sense. Heck, even the CSS Working Group admits to wanting a time-traveling contraption… in the specifications!

If, by some stroke of opportunity, I were given free rein to rename some things in CSS, a couple of ideas come to mind, but if you want more, you can find an ongoing list of mistakes made in CSS… compiled by the CSS Working Group itself! Take background-repeat, for example.

Why not fix them? Sadly, it isn't as easy as simply fixing them. People already built their websites with these quirks in mind, and changing them would break those sites. Consider it technical debt. This is why I think the CSS Working Group deserves an onslaught of praise. Designing new features that are immutable once shipped has to be a nerve-wracking experience that involves inexact science. It's not that we haven't seen the specifications change or evolve in the past — they most certainly have — but the value of getting things right the first time is a beast of burden.
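To make the kind of quirk we are talking about concrete, here is one classic, frequently cited inconsistency; both declarations are valid CSS, and the mismatched keyword spelling is the point:

    .ticker { white-space: nowrap; }        /* "no" is fused with the keyword here... */
    .hero { background-repeat: no-repeat; } /* ...but hyphenated here */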
-
FOSS Weekly #25.05: LibreOffice Tip, Launcher Customization, Moving Away from Google and More
by: Abhishek Prakash

In the previous newsletter, I shared the new tools directory page proposal and asked for your feedback. From the responses I got, an overwhelming majority of FOSSers liked this idea, so I'll work on such pages. Since I want them to have some additional features, they will take a little longer. I'll inform you once they are live. Stay tuned 😄 Would you like to see more pages like this?

💬 Let's see what else you get in this edition:

- A new Hyprland release.
- FSF's new commemorative logo.
- Microsoft's popular offering being handed a lawsuit.
- And other Linux news, tips and, of course, memes!

This edition of FOSS Weekly is supported by ONLYOFFICE.

✨ ONLYOFFICE PDF Editor: Create, Edit and Collaborate on PDFs on Linux

The ONLYOFFICE suite now offers an updated PDF editor that comes equipped with collaborative PDF editing and other useful features. Deploy ONLYOFFICE Docs on your Linux server and integrate it with your favourite platform, such as Nextcloud, ownCloud, Drupal, Moodle, WordPress, Redmine and more. Alternatively, you can download the free desktop app for your Linux distro.

Online PDF editor, reader and converter | ONLYOFFICE
View and create PDF files from any text document, spreadsheet or presentation, convert PDF to DOCX online, create fillable PDF forms.
ONLYOFFICE

📰 Linux and Open Source News

- Hyprland 0.47.0 released with HDR support and squircles.
- Bitwarden has tightened security for accounts without 2FA enabled.
- Mozilla Thunderbird 134 has landed with a new notification system.
- A few offers on Data Privacy Day.
- Microsoft has launched DocumentDB, an open source document store platform.
- The Free Software Foundation (FSF) has unveiled a fresh logo to commemorate its forthcoming 40th anniversary.

🧠 What We're Thinking About

Facebook is banning links from many Linux websites.

Everything is Spam on Facebook Unless It is Paid Post (or Actual Spam)
Linux websites are getting ill-treatment from Facebook.
It's FOSS News | Abhishek

Microsoft's popular social media platform, LinkedIn, has been dragged to court over alleged misuse of user data.

🧮 Linux Tips, Tutorials and More

- Moving away from Google's ecosystem is a smart move in the long run.
- You can share files between guest and host operating systems in GNOME Boxes.
- A small tip on tracking changes and version control with LibreOffice.
- Learn to merge PDF files in Linux.
- And some tips on customizing the launcher in Ubuntu.

👷 Maker's and AI Corner

Running the impressive DeepSeek R1 AI model on a Raspberry Pi 5 is possible.

I Ran Deepseek R1 on Raspberry Pi 5 and No, it Wasn't 200 tokens/s
Everyone is seeking Deepseek R1 these days. Is it really as good as everyone claims? Let me share my experiments of running it on a Raspberry Pi.
It's FOSS | Abhishek Kumar

✨ Apps highlight

If you like listening to audiobooks, then Cozy can be a great addition to your Linux system.

Cozy: A Super Useful Open Source Audiobook Player for Linux
Cozy makes audiobook listening easy with simple controls and an intuitive interface.
It's FOSS News | Sourav Rudra

Take your music anywhere with the open source Musify app.

🛍️ Deal You Would Love

15 Linux and DevOps books for just $18, plus your purchase supports the Code for America organization. Get them on Humble Bundle.

Humble Tech Book Bundle: Linux from Beginner to Professional by O'Reilly
Learn Linux with ease using this library of coding and programming courses by O'Reilly. Pay what you want and support Code For America.
Humble Bundle

🎟️ Event alert

Foss FEST 2025 is open for registration. Groups of international students can participate in the hackathon and win prizes worth 4,000 euros. It's FOSS is an official media partner for this event.

Foss FEST 2025: International Hackathon
OpenSource Science B.V.

🧩 Quiz Time

Can you beat this Linux directory structure puzzle?

Linux Directory Structure: Puzzle
The Linux directory structure is fascinating and an important thing to know about. Take a guess to solve this puzzle!
It's FOSS | Ankush Das

💡 Quick Handy Tip

You can easily open new windows for running apps by either middle-clicking or Ctrl+left-clicking on the app in the dock. It also works for apps that are not running. Usually, the apps open in the same workspace; however, in multi-monitor setups, this might open new app windows on the other monitor.

🤣 Meme of the Week

Windows got destroyed hard. 🤭

🗓️ Tech Trivia

Apple launched the iPad on April 3, 2010, redefining mobile computing with its touch-based, versatile design. It bridged the gap between smartphones and laptops, setting the standard for tablets. The iPad's success has reshaped the tech world and inspired countless imitators.

🧑🤝🧑 FOSSverse Corner

Pro FOSSer Daniel is showcasing his Gentoo virtual machine setup on his laptop.

Gentoo vm install on my laptop
"Up early this morning putting the finishing touches to Gentoo VM, to reboot to the CLI!! I have compiled two kernels, a gentoo-source, that I did a manual compile and a gentoo-kernel-dis for backup!! If the gentoo-source kernel works, I will nuke the gentoo-kernel. Just for fun, take a look-see. Been awhile since we have had a 8 inch snow!!!"
It's FOSS Community | Daniel_Phillips

❤️ With love

Share it with your Linux-using friends and encourage them to subscribe (hint: it's here). Share the articles in Linux subreddits and community forums. Follow us on Google News and stay updated in your News feed. Opt for It's FOSS Plus membership and support us 🙏

Enjoy FOSS 😄
-
LHB Linux Digest #25.02: Linux Books, Watchtower, ReplicaSets and More
by: Abhishek Prakash Wed, 29 Jan 2025 20:04:25 +0530

What's in a name? Sometimes the name can be deceptive. For example, in the Linux Tips and Tutorials section of this newsletter, I am sharing a few commands that have nothing to do with what their names indicate 😄

Here are the other highlights of this edition of LHB Linux Digest:

- Nice and renice commands
- ReplicaSet in Kubernetes
- Self-hosted code snippet manager
- And more tools, tips and memes for you

This edition of the LHB Linux Digest newsletter is supported by RELIANOID.

❇️ Comprehensive Load Balancing Solutions For Modern Networks

RELIANOID's load balancing solutions combine the power of SD-WAN, secure application delivery, and elastic load balancing to optimize traffic distribution and ensure unparalleled performance. With features like a robust Web Application Firewall (WAF) and built-in DDoS protection, your applications remain secure and resilient against cyber threats. High availability ensures uninterrupted access, while open networking and user experience networking enhance flexibility and deliver a seamless experience across all environments, from on-premises to cloud.

Free Load Balancer Download | Community Edition by RELIANOID
Discover our Free Load Balancer | Community Edition | The best Open Source Load Balancing software for providing high availability and content switching services.
RELIANOID | Admin

📖 Linux Tips and Tutorials

- There is an install command in Linux, but it doesn't install anything.
- There is a hash command in Linux, but it doesn't have anything to do with hashing passwords.
- There is a tree command in Linux, but it has nothing to do with plants.
- There is a wc command in Linux, and it has nothing to do with washrooms 🚻 (I understand that you know what wc stands for in the command, but I still find it amusing).

Use the nice and renice commands to change process priority.

Change Process Priority With nice and renice Commands
You can control whether a certain process should get priority in consuming CPU with the nice and renice commands.
Linux Handbook | Helder
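As a quick taste of that last tip before you read the full tutorial, here is a minimal sketch of the nice/renice workflow; the command, niceness values, and PID are illustrative:

    # run a CPU-heavy job at the lowest scheduling priority (niceness 19)
    nice -n 19 tar -czf backup.tar.gz /home/user

    # later, make an already-running process nicer by its PID
    renice 15 -p 1234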
-
What on Earth is the `types` Descriptor in View Transitions?
by: Juan Diego Rodríguez Wed, 29 Jan 2025 14:13:53 +0000

Have you ever stumbled upon something new and gone to research it, just to find that there is little-to-no information about it? It's a mixed feeling: confusing and discouraging because there is no apparent direction, but also exciting because it's probably new to lots of people, not just you. Something like that happened to me while writing an Almanac entry for the @view-transition at-rule and its types descriptor.

You may already know about Cross-Document View Transitions: with a few lines of CSS, they allow for transitions between two pages, something that in the past required a single-page app framework with a side of animation library. In other words, lots of JavaScript. To start a transition between two pages, we have to set the @view-transition at-rule's navigation descriptor to auto on both pages, and that gives us a smooth cross-fade transition between the two pages. So, as the old page fades out, the new page fades in.

    @view-transition {
      navigation: auto;
    }

That's it! And navigation is the only descriptor we need. In fact, it's the only descriptor available for the @view-transition at-rule, right? Well, it turns out there is another descriptor, a lesser-known brother, and one that probably envies how much attention navigation gets: the types descriptor.

What do people say about types?

Cross-Document View Transitions are still fresh from the oven, so it's normal that people haven't fully dissected every aspect of them, especially since they introduce a lot of new stuff: a new at-rule, a couple of new properties, and tons of pseudo-elements and pseudo-classes. However, it still surprises me how little types gets mentioned. Some documentation fails to even name it among the valid @view-transition descriptors. Luckily, though, the CSS specification does offer a little clarification about it. To be more precise, types can take a space-separated list with the names of the active types (as <custom-ident>), or none if there aren't valid active types for that page.

    Name: types
    For: @view-transition
    Value: none | <custom-ident>+
    Initial: none

So the following values would work inside types:

    @view-transition {
      navigation: auto;
      types: bounce;
    }

    /* or a list */
    @view-transition {
      navigation: auto;
      types: bounce fade rotate;
    }

Yes, but what exactly are "active" types? That word "active" seems to be doing a lot of heavy lifting in the CSS specification's definition, and I want to unpack it to better understand what it means.

Active types in view transitions

The problem: a cross-fade animation for every page is good, but a common need is to change the transition depending on the pages we are navigating between. For example, on paginated content, we could slide the content to the right when navigating forward and to the left when navigating backward. In a social media app, clicking a user's profile picture could persist the picture throughout the transition. All this would mean defining several transitions in our CSS, but doing so would make them conflict with each other in one big slop. What we need is a way to define several transitions but only pick one depending on how the user navigates the page.

The solution: active types define which transition gets used and which elements should be included in it. In CSS, they are used through :active-view-transition-type(), a pseudo-class that matches an element if it has a specific active type. Going back to our last example, we defined the document's active type as bounce.
We could enclose that bounce animation behind an :active-view-transition-type(bounce), such that it only triggers on that page.

    /* This one will be used! */
    html:active-view-transition-type(bounce) {
      &::view-transition-old(page) {
        /* Custom animation */
      }
      &::view-transition-new(page) {
        /* Custom animation */
      }
    }

This prevents other view transitions from running if they don't match any active type:

    /* This one won't be used! */
    html:active-view-transition-type(slide) {
      &::view-transition-old(page) {
        /* Custom animation */
      }
      &::view-transition-new(page) {
        /* Custom animation */
      }
    }

I asked myself whether this triggers the transition when going to the page, when leaving the page, or in both cases. It turns out it only limits the transition when going to the page, so the last bounce animation is only triggered when navigating toward a page with a bounce value on its types descriptor, but not when leaving that page. This allows for custom transitions depending on which page we are going to.

The following demo has two pages that share a stylesheet with the bounce and slide view transitions, both respectively enclosed behind an :active-view-transition-type(bounce) and :active-view-transition-type(slide) like the last example. We can control which page uses which view transition through the types descriptor.

The first page uses the bounce animation:

    @view-transition {
      navigation: auto;
      types: bounce;
    }

The second page uses the slide animation:

    @view-transition {
      navigation: auto;
      types: slide;
    }

You can visit the demo here and see the full code over at GitHub.

The types descriptor is used more in JavaScript

The main problem is that we can only control the transition depending on the page we're navigating to, which puts a major cap on how much we can customize our transitions. For instance, the pagination and social media examples we looked at aren't possible using just CSS, since we need to know where the user is coming from. Luckily, using the types descriptor is just one of three ways that active types can be populated. Per the spec, they can be:

- Passed as part of the arguments to startViewTransition(callbackOptions)
- Mutated at any time, using the transition's types
- Declared for a cross-document view transition, using the types descriptor

The first option is for starting a view transition from JavaScript, but we want to trigger transitions when the user navigates to the page by themselves (like when clicking a link). The third option is the types descriptor, which we already covered. The second option is the right one for this case! Why? It lets us set the active transition type on demand, and we can perform that change just before the transition happens using the pagereveal event. That means we can get the user's start and end pages from JavaScript and then set the correct active type for that case.

I must admit, I am not the most experienced guy to talk about this option, so once I demo the heck out of different transitions with active types, I'll come back with my findings! In the meantime, I encourage you to read about active types here if you are like me and want more on view transitions:

- View transition types in cross-document view transitions (Bramus)
- Customize the direction of a view transition with JavaScript (Umar Hansa)
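And for a taste of that second option in the meantime, here is a minimal sketch. It assumes browser support for the pagereveal event and the Navigation API; the URL check and type names are purely illustrative:

    window.addEventListener("pagereveal", (event) => {
      // Only act when a cross-document view transition is underway
      if (!event.viewTransition) return;
      // Illustrative: choose an active type based on where the user came from
      const fromURL = navigation.activation?.from?.url ?? "";
      event.viewTransition.types.add(fromURL.includes("/page-1") ? "slide" : "bounce");
    });

With the active type set this way, the matching :active-view-transition-type() block from the shared stylesheet kicks in, just like in the CSS-only demos above.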
-
Understanding ReplicaSet in Kubernetes With Hands-on Example
by: LHB Community Wed, 29 Jan 2025 18:26:26 +0530

Kubernetes is a powerful container orchestration platform that enables developers to manage and deploy containerized applications with ease. One of its key components is the ReplicaSet, which plays a critical role in ensuring high availability and scalability of applications. In this guide, we will explore the ReplicaSet, its purpose, and how to create and manage it effectively in your Kubernetes environment.

What is a ReplicaSet in Kubernetes?

A ReplicaSet in Kubernetes is a higher-level abstraction that ensures a specified number of pod replicas are running at all times. If a pod crashes or becomes unresponsive, the ReplicaSet automatically creates a new pod to maintain the desired state. This guarantees high availability and resilience for your applications.

The key purposes of a ReplicaSet include:

- Scaling pods: ReplicaSets manage the replication of pods, ensuring the desired number of replicas are always running.
- High availability: ensures that your application remains available even if one or more pods fail.
- Self-healing: automatically replaces failed pods to maintain the desired state.
- Efficient workload management: helps distribute workloads across nodes in the cluster.

How Does a ReplicaSet Work?

ReplicaSets rely on selectors to match pods using labels. A ReplicaSet uses these selectors to monitor the pods and ensures the actual number of pods matches the specified replica count. If the number is less than the desired count, new pods are created. If it's greater, excess pods are terminated.

Creating a ReplicaSet

To create a ReplicaSet, you define its configuration in a YAML file. Here's an example:

    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: nginx-replicaset
      labels:
        app: nginx
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
            - name: nginx
              image: nginx:1.21
              ports:
                - containerPort: 80

In this YAML file:

- replicas: specifies the desired number of pod replicas.
- selector: matches pods with the label app=nginx.
- template: defines the pod's specifications, including the container image and port.

Deploying a ReplicaSet

Once you have the YAML file ready, follow these steps to deploy it in your Kubernetes cluster.

Apply the YAML configuration to create the ReplicaSet:

    kubectl apply -f nginx-replicaset.yaml

Verify that the ReplicaSet was created and the pods are running:

    kubectl get replicaset

Output:

    NAME               DESIRED   CURRENT   READY   AGE
    nginx-replicaset   3         3         3       5s

View the pods created by the ReplicaSet:

    kubectl get pods

Output:

    NAME                     READY   STATUS    RESTARTS   AGE
    nginx-replicaset-xyz12   1/1     Running   0          10s
    nginx-replicaset-abc34   1/1     Running   0          10s
    nginx-replicaset-lmn56   1/1     Running   0          10s

Scaling a ReplicaSet

You can easily scale the number of replicas in a ReplicaSet. For example, to scale the above ReplicaSet to 5 replicas:

    kubectl scale replicaset nginx-replicaset --replicas=5

Verify the updated state:

    kubectl get replicaset

Output:

    NAME               DESIRED   CURRENT   READY   AGE
    nginx-replicaset   5         5         5       2m

Learn Kubernetes Operator
Learn to build, test and deploy Kubernetes Operators using Kubebuilder as well as Operator SDK in this course.
Linux Handbook | Team LHB
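Before wrapping up, here is a quick, hedged way to watch the self-healing behavior described earlier in action:

    # delete one pod by hand (the pod name is illustrative; copy one from kubectl get pods)
    kubectl delete pod nginx-replicaset-xyz12

    # list the pods again: a replacement is already being created
    kubectl get pods

The deleted pod disappears, and the ReplicaSet immediately schedules a new pod (with a new name) to restore the desired replica count.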
Whether you’re managing a small application or a large-scale deployment, understanding ReplicaSets is crucial for effective workload management. ✍️Author: Hitesh Jethwa has more than 15+ years of experience with Linux system administration and DevOps. He likes to explain complicated topics in easy to understand way.
-
Top Container Monitoring Solutions: Tools to Keep Your Deployments Running Smoothly
by: Satoshi Nakamoto Wed, 29 Jan 2025 16:53:22 +0530

A few years ago, we witnessed a shift to containers, and today containers have become an integral part of the IT infrastructure for most companies. Traditional monitoring tools often fall short in providing the visibility needed to ensure performance, security, and reliability. In my experience, monitoring resource allocation is the most important part of deploying containers, and that is why I have rounded up the top container monitoring solutions offering real-time insights into your containerized environments.

Top Container Monitoring Solutions

Before I jump into details, here's a brief overview of all the tools I'll be discussing in a moment:

- Middleware: free up to 100GB; pay-as-you-go at $0.3/GB; custom enterprise plans. Free tier: yes (up to 100GB data, 1k RUM sessions, 20k synthetic checks, 14-day retention). Paid highlights: unlimited data volume, data pipeline and ingestion control, single sign-on, dedicated support.
- Datadog: free plan (limited hosts and 1-day metric retention); Pro starts at $15/host/month; Enterprise from $23. Free tier: yes (basic infrastructure monitoring for up to 5 hosts, limited metric retention). Paid highlights: extended retention, advanced anomaly detection, over 750 integrations, multi-cloud support.
- Prometheus & Grafana: open source; no licensing costs. Free tier: yes (full-featured metrics collection with Prometheus, custom dashboards with Grafana). Paid highlights: self-managed support only; optional managed services through third-party providers.
- Dynatrace: 15-day free trial; usage-based at $0.04/hour for infrastructure-only, $0.08/hour for full-stack. Free tier: trial only. Paid highlights: AI-driven root cause analysis, automatic topology discovery, enterprise support, multi-cloud observability.
- Sematext: free plan (Basic) with limited container monitoring; paid plans start at $0.007/container/hour. Free tier: yes (live metrics for a small number of containers, 30-minute retention, limited alert rules). Paid highlights: increased container limits, extended retention, unlimited alert rules, full-stack monitoring.
- Sysdig: free tier; Sysdig Monitor starts at $20/host/month; Sysdig Secure is $60/host/month. Free tier: yes (basic container monitoring, limited metrics and retention). Paid highlights: advanced threat detection, vulnerability management, compliance checks, Prometheus support.
- SolarWinds: no permanent free plan; pricing varies by module (starts around $27.50/month or $2995 single license). Free tier: trial only. Paid highlights: pre-built Docker templates, application-centric mapping, hardware health, synthetic monitoring.
- Splunk: Observability Cloud starts at $15/host/month (annual billing); free trial available. Free tier: trial only. Paid highlights: real-time log and metrics analysis, AI-based anomaly detection, multi-cloud integrations, advanced alerting.
- MetricFire: paid plans start at $19/month; free trial offered. Free tier: trial only. Paid highlights: integration with Graphite and Prometheus, customizable dashboards, real-time alerts.
- SigNoz: open source (self-hosted) or custom paid support. Free tier: yes (full observability stack of metrics, traces, and logs with no licensing costs). Paid highlights: commercial support, managed hosting services, extended retention options.

Here, "trial only" means that the tool does not offer a permanent free tier but provides a limited-time free trial to test its features. After the trial period ends, you must subscribe to a paid plan to continue using the tool.
1. Middleware

Middleware is an excellent choice for teams looking for a free or scalable container monitoring solution. It provides pre-configured dashboards for Kubernetes environments and real-time visibility into container health. With a free tier supporting up to 100GB of data and a pay-as-you-go model at $0.3/GB thereafter, it's ideal for startups or small teams.

Key features:
- Pre-configured dashboards for Kubernetes
- Real-time metrics tracking
- Alerts for critical events
- Correlation of metrics with logs and traces

Pros:
- Free tier available
- Easy setup with minimal configuration
- Scalable pricing model

Cons:
- Limited advanced features compared to premium tools

Try Middleware

2. Datadog

Datadog is a premium solution offering observability across infrastructure, applications, and logs. Its auto-discovery feature makes it particularly suited for dynamic containerized environments. The free plan supports up to five hosts with limited retention; paid plans start at $15 per host per month.

Key features:
- Real-time performance tracking
- Anomaly detection using ML
- Auto-discovery of new containers
- Distributed tracing and APM

Pros:
- Extensive integrations (750+)
- User-friendly interface
- Advanced visualization tools

Cons:
- High cost for small teams
- Pricing can vary based on usage spikes

Try Datadog

3. Prometheus & Grafana

This open-source duo provides powerful monitoring and visualization capabilities. Prometheus has an edge in metrics collection with its PromQL query language, while Grafana offers stunning visualizations. That makes the pair perfect for teams seeking customization without licensing costs.

Key features:
- Time-series data collection
- Flexible query language (PromQL)
- Customizable dashboards
- Integrated alerting system

Pros:
- Free to use
- Highly customizable
- Strong community support

Cons:
- Requires significant setup effort
- Limited out-of-the-box functionality

Try Prometheus & Grafana

4. Dynatrace

Dynatrace is an AI-powered observability platform designed for large-scale hybrid environments. It automates topology discovery and offers deep insights into containerized workloads. Pricing starts at $0.04/hour for infrastructure-only monitoring.

Key features:
- AI-powered root cause analysis
- Automatic topology mapping
- Real-user monitoring
- Cloud-native support (Kubernetes/OpenShift)

Pros:
- Automated configuration
- Scalability for large environments
- End-to-end visibility

Cons:
- Expensive for smaller teams
- Proprietary platform limits flexibility

Try Dynatrace

5. Sematext

Sematext is a lightweight tool that allows users to monitor metrics and logs across Docker and Kubernetes platforms. Its free plan supports basic container monitoring with limited retention and alerting rules. Paid plans start at just $0.007/container/hour.

Key features:
- Unified dashboard for logs and metrics
- Real-time insights into containers and hosts
- Auto-discovery of new containers
- Anomaly detection and alerting

Pros:
- Affordable pricing plans
- Simple setup process
- Full-stack observability features

Cons:
- Limited advanced features compared to premium tools

Try Sematext
6. Sysdig

Sysdig pairs container monitoring with security features. There is a free tier with basic container monitoring and limited metrics and retention; Sysdig Monitor starts at $20/host/month, and Sysdig Secure is $60/host/month.

Key features:
- Basic container monitoring on the free tier
- Advanced threat detection
- Vulnerability management and compliance checks
- Prometheus support

Try Sysdig
7. SolarWinds

SolarWinds offers an intuitive solution for SMBs needing straightforward container monitoring. While it doesn't offer a permanent free plan, its pricing starts at around $27.50/month or $2995 as a one-time license fee.

Key features:
- Pre-built Docker templates
- Application-centric performance tracking
- Hardware health monitoring
- Dependency mapping

Pros:
- Easy deployment and setup
- Out-of-the-box templates
- Suitable for smaller teams

Cons:
- Limited flexibility compared to open-source tools

Try SolarWinds

8. Splunk

Splunk provides not only log analysis but also strong container monitoring capabilities through its Observability Cloud suite. Pricing starts at $15/host/month on annual billing.

Key features:
- Real-time log and metrics analysis
- AI-based anomaly detection
- Customizable dashboards and alerts
- Integration with OpenTelemetry standards

Pros:
- Powerful search capabilities
- Scalable architecture
- Extensive integrations

Cons:
- High licensing costs for large-scale deployments

Try Splunk

9. MetricFire

MetricFire simplifies container monitoring by offering customizable dashboards and seamless integration with Kubernetes and Docker. It is ideal for teams looking for a reliable hosted solution without the hassle of managing infrastructure. Pricing starts at $19/month.

Key features:
- Hosted Graphite and Grafana dashboards
- Real-time performance metrics
- Integration with Kubernetes and Docker
- Customizable alerting systems

Pros:
- Easy setup and configuration
- Scales effortlessly as metrics grow
- Transparent pricing model
- Strong community support

Cons:
- Limited advanced features compared to proprietary tools
- Requires technical expertise for full customization

Try MetricFire

10. SigNoz

SigNoz is an open-source alternative to proprietary APM tools like Datadog and New Relic. It offers a platform for logs, metrics, and traces under one interface. With native OpenTelemetry support and a focus on distributed tracing for microservices architectures, SigNoz is perfect for organizations seeking cost-effective yet powerful observability solutions.

Key features:
- Distributed tracing for microservices
- Real-time metrics collection
- Centralized log management
- Customizable dashboards
- Native OpenTelemetry support

Pros:
- Completely free if self-hosted
- Active development community
- Cost-effective managed cloud option
- Comprehensive observability stack

Cons:
- Requires infrastructure setup if self-hosted
- Limited enterprise-level support compared to proprietary tools

Try SigNoz

Evaluate your infrastructure complexity and budget to select the tool that best aligns with your goals!
-
White-Label Link Building for Linux-Based Websites: Saving Time and Resources
by: Janus Atienza Tue, 28 Jan 2025 23:16:45 +0000

As a digital marketing agency, your focus is to provide high-quality services to your clients while ensuring that operations run smoothly. However, managing the various components of SEO, such as link-building, can be time-consuming and resource-draining. This is where white-label link-building services come into play. By outsourcing your link-building efforts, you can save time and resources, allowing your agency to focus on more strategic tasks that directly contribute to your clients' success. Below, we'll explore how these services can benefit your agency in terms of time and resource management.

Focus on Core Competencies

When you choose to outsource your link-building efforts to a white-label service, it allows your agency to focus on your core competencies. As an agency, you may excel in content strategy, social media marketing, or paid advertising. However, link-building requires specialized knowledge, experience, and resources. A white-label link-building service can handle this aspect of SEO for you, freeing up time for your team to focus on what they do best. This way, you can maintain a high level of performance in other areas without spreading your team too thin.

Eliminate the Need for Specialized Staff

Building a successful link-building strategy requires expertise, which may not be available within your existing team. Hiring specialized staff to manage outreach campaigns, content creation, and link placements can be expensive and time-consuming. However, white-label link-building services already have the necessary expertise and resources in place. You won't need to hire or train new employees to handle this aspect of SEO. The service provider's team can execute campaigns quickly and effectively, allowing your agency to scale without expanding its internal workforce.

Access to Established Relationships and Networks

Link-building is not just about placing links on any website; it's about building relationships with authoritative websites in your client's industry, especially within relevant open-source projects and Linux communities. This process takes time to establish and requires continuous effort. A white-label link-building service typically has established relationships with high-authority websites, bloggers, and influencers across various industries. By leveraging these networks, they can secure quality backlinks faster and more efficiently than your agency could on its own. This reduces the time spent on outreach and relationship-building, ensuring that your client's SEO efforts are moving forward without delays. For Linux-focused sites, this can include participation in relevant forums and contributing to open-source projects.

Efficient Campaign Execution

White-label link-building services are designed to execute campaigns efficiently. These agencies have streamlined processes and advanced tools that allow them to scale campaigns while maintaining quality. They can manage multiple campaigns at once, ensuring that your clients' link-building needs are met in a timely manner. By outsourcing to a provider with a proven workflow, you can avoid the inefficiencies associated with trying to build an in-house link-building team. This leads to faster execution, better results, and more satisfied clients.

Cost-Effectiveness

Managing link-building in-house can be costly. Aside from the salaries and benefits of hiring staff, you'll also need to invest in tools, software, and outreach efforts.
White-label link-building services, on the other hand, offer more cost-effective solutions. These providers typically offer packages that include all necessary tools, such as backlink analysis software, outreach platforms, and reporting tools, which can be expensive to purchase and maintain on your own. By outsourcing, you save money on infrastructure and overhead costs, all while getting access to the best tools available.

Reduce Time Spent on Reporting and Analysis

Effective link-building campaigns require consistent monitoring, analysis, and reporting. Generating reports, tracking backlink quality, and assessing the impact of links on search rankings can be time-consuming tasks. When you outsource this responsibility to a white-label link-building service, they will handle reporting on your behalf. The provider will deliver customized reports that highlight key metrics like the number of backlinks acquired, domain authority, traffic increases, and overall SEO performance. This allows you to deliver the necessary information to your clients while saving time on report generation and analysis. For Linux-based servers, this can also involve analyzing server logs for SEO-related issues.

Scalability and Flexibility

As your agency grows, so does the demand for SEO services. One of the challenges agencies face is scaling their link-building efforts to accommodate more clients or larger campaigns. A white-label link-building service offers scalability and flexibility, meaning that as your client base grows, the provider can handle an increased volume of link-building efforts without compromising on quality. Whether you're managing a single campaign or hundreds of clients, a reliable white-label service can adjust to your needs and ensure that every client receives the attention their SEO efforts deserve.

Mitigate Risks Associated with Link-Building

Link-building, if not done properly, can result in penalties from search engines, harming your client's SEO performance. Managing link-building campaigns in-house without proper knowledge of SEO best practices can lead to mistakes, such as acquiring low-quality or irrelevant backlinks. White-label link-building services are experts in following search engine guidelines and using ethical link-building practices. By outsourcing, you reduce the risk of penalties, ensuring that your clients' SEO efforts are safe and aligned with best practices.

Stay Up-to-Date with SEO Trends

SEO is an ever-evolving field, and staying up-to-date with the latest trends and algorithm updates can be a full-time job. White-label link-building services are dedicated to staying current with industry changes. By outsourcing your link-building efforts, you can be sure that the provider is implementing the latest techniques and best practices in their campaigns. This ensures that your client's link-building strategies are always aligned with search engine updates, maximizing their chances of success. This includes familiarity with SEO tools that run on Linux, such as command-line tools and open-source crawlers, and understanding the nuances of optimizing websites hosted on Linux servers.

Conclusion

White-label link-building services offer significant time and resource savings for digital marketing agencies. By outsourcing link-building efforts, your agency can focus on core business areas, eliminate the need for specialized in-house staff, and streamline campaign execution.
The cost-effectiveness and scalability of these services also make them an attractive option for agencies looking to grow their SEO offerings without overextending their resources. Especially for clients using Linux-based infrastructure, leveraging a white-label service with expertise in this area can be a significant advantage. With a trusted white-label link-building partner, you can deliver high-quality backlinks to your clients, improve their SEO rankings, and drive long-term success.
-
Sigma Browser
by: aiparabellum.com Tue, 28 Jan 2025 07:28:06 +0000

In the digital age, where online privacy and security are paramount, tools like Sigma Browser are gaining significant attention. Sigma Browser is a privacy-focused web browser designed to provide users with a secure, fast, and ad-free browsing experience. Built with advanced features to protect user data and enhance online anonymity, Sigma Browser is an excellent choice for individuals and businesses alike. In this article, we'll dive into its features, how it works, benefits, pricing, and more to help you understand why Sigma Browser is a standout in the world of secure browsing.

Features of Sigma Browser AI

Sigma Browser offers a range of features tailored to ensure privacy, speed, and convenience. Here are some of its key features:

- Ad-Free Browsing: Enjoy a seamless browsing experience without intrusive ads.
- Enhanced Privacy: Built-in privacy tools to block trackers and protect your data.
- Fast Performance: Optimized for speed, ensuring quick page loads and smooth navigation.
- Customizable Interface: Personalize your browsing experience with themes and settings.
- Cross-Platform Sync: Sync your data across multiple devices for a unified experience.
- Secure Browsing: Advanced encryption to keep your online activities private.

How It Works

Sigma Browser is designed to be user-friendly while prioritizing security. Here's how it works:

- Download and Install: Simply download Sigma Browser from its official website and install it on your device.
- Set Up Privacy Settings: Customize your privacy preferences, such as blocking trackers and enabling encryption.
- Browse Securely: Start browsing the web with enhanced privacy and no ads.
- Sync Across Devices: Log in to your account to sync bookmarks, history, and settings across multiple devices.
- Regular Updates: The browser receives frequent updates to improve performance and security.

Benefits of Sigma Browser AI

Using Sigma Browser comes with numerous advantages:

- Improved Privacy: Protects your data from third-party trackers and advertisers.
- Faster Browsing: Eliminates ads and optimizes performance for quicker loading times.
- User-Friendly: Easy to set up and use, even for non-tech-savvy individuals.
- Cross-Device Compatibility: Access your browsing data on any device.
- Customization: Tailor the browser to suit your preferences and needs.

Pricing

Sigma Browser offers flexible pricing plans to cater to different users:

- Free Version: Includes basic features like ad-free browsing and privacy protection.
- Premium Plan: Unlocks advanced features such as cross-device sync and priority support.

Pricing details are available on the official website.

Sigma Browser Review

Sigma Browser has received positive feedback from users for its focus on privacy and performance. Many appreciate its ad-free experience and the ability to customize the interface. The cross-platform sync feature is also a standout, making it a convenient choice for users who switch between devices. Some users have noted that the premium plan could offer more features, but overall, Sigma Browser is highly regarded for its security and ease of use.

Conclusion

Sigma Browser is a powerful tool for anyone looking to enhance their online privacy and browsing experience. With its ad-free interface, robust privacy features, and fast performance, it stands out as a reliable choice in the crowded browser market. Whether you're a casual user or a business professional, Sigma Browser offers the tools you need to browse securely and efficiently.
Give it a try and experience the difference for yourself.
-
How I am Moving Away From Google's Ecosystem
by: Ankush Das

Google's ecosystem includes several products and services. It is one of the prominent ecosystems on the internet, with a dominating market share. While I believe their products require no introduction, as a formality, I should mention some of them: Gmail, YouTube, Google Chrome, Google Drive, Google Search, Google Photos, and Google Gemini.

Considering I am an Android user and prioritize my convenience, I have been using Google services for a long time now. However, I have decided to move away from Google's ecosystem to try out other options. Sure, it is tough to eliminate their presence without affecting my convenience, so the aim is to minimize it. Before I tell you my strategy to make the move to alternative options, let me tell you why.

Why The Move Away From Google's Ecosystem?

Unsurprisingly, the first reason for the switch is privacy. While I can never be anonymous on the internet, I can share fewer details about me and my data with the services I use. And Google is hardly the most privacy-friendly choice out there. They get to know a lot about you from the data you store with them. It is not illegal, but it is something that I no longer would like to do.

I should mention that I'm not mixing security with privacy here. Google has been good enough to keep my accounts secure, or else I would not have been safely using my Gmail account for such a long time.

Next, I want to give a chance to innovative players in the market. The options may not necessarily be better than Google services, but they have their unique selling points, which could unlock a benefit for me that I never realized. And, finally, giving open source a better chance.

Now, let me highlight how exactly I plan to move away from Google's ecosystem.

📋 There are some affiliate links in the article. Please read our affiliate policy for more details.

First Step: Finding Out The Areas With Alternatives

If you know there are options, it is easy to switch. So, first, I set out to find the types of services that provide me with options. Some easy ones include:

- Search engine
- Email + calendar
- Cloud storage
- AI chatbot
- Video conferencing
- Website analytics
- Browser

And here are the ones that are a bit tricky to move away from:

- Document suite
- Photo collection
- Video sharing platform
- Maps

I will tackle all of them, but let us focus on the easy ones first.

The Best Google Alternatives For My Use-Case (And Hopefully For You!)

Now that we know what kind of alternatives we need, I just need to narrow down the options that are capable enough.

- Email: Gmail → Proton Mail, Tuta
- Search engine: Google → DuckDuckGo
- Calendar: Google Calendar → Proton Calendar
- Cloud storage: Google Drive → pCloud, Proton Drive
- AI chatbot: Gemini → Llama with Ollama
- Video meetings: Google Meet → Jitsi Meet
- Website analytics: Google Analytics → Umami, Fathom
- Web browser: Chrome → Firefox
- Documents: Google Docs → Proton Docs, Nextcloud
- Password manager: Chrome's built-in → Bitwarden, Proton Pass
- Phone photo backup: Google Photos → Ente

Yes, I could list countless alternatives, but an alternative is useless if it cannot do half of the things a Google service lets me do. The goal is not to aimlessly move away from Google, but to pick meaningful alternatives. So, with the above-mentioned categories in mind, I will tell you about the best options that I use, along with some suggestions for you.

✋ Please note that the alternatives I am using or suggesting may not be an exact replacement for a Google service. They might be good enough for me, but perhaps they won't suit your needs or preferences.
Try them on your own and see if you are comfortable with the alternatives.

Search Engine: DuckDuckGo

In my opinion, DuckDuckGo is a great alternative to the Google search engine. It promises to keep your personal activity private while offering additional privacy-focused services. For instance, I use DuckDuckGo's Privacy Essentials browser extension to get rid of trackers and to generate free email aliases. I can also integrate the email aliases with Bitwarden. Not to forget, they also have a privacy-friendly AI chatbot providing access to AI models like GPT-4o mini, and a browser for Windows/macOS. So, DuckDuckGo's ecosystem makes it an interesting choice for what I need.

📋 You can also try some privacy-focused search engines like Ecosia, Startpage, and searx.

Email + Calendar: Proton Mail

Proton Mail is an obvious choice here because of its Google-like ecosystem offering. You can also try Tuta for privacy-focused email. For me, I have been using Proton Mail since its early days, so it makes sense to stick with it, along with the potential of using Proton's other services (which we will also touch upon as you read on).

Here, the relevant service to pair with the email is Proton Calendar. I use it for free (personally), and if I want to scale up my requirements, I can upgrade to a paid plan for more storage and features like custom email domains. And, for my work at It's FOSS, we have a Visionary (paid) account, so we get more storage and all the perks of premium Proton services. I would say both the free and premium options justify their use-cases to replace Gmail and Google Calendar for me.

Cloud Storage

Considering I already use Proton Mail (and its calendar), it is a no-brainer to go with Proton Drive, since it is an end-to-end encrypted (E2E) option. You only get 5 GB of space, which is lower than what you get with Google, but it should be decent if you want to store a couple of important documents. And, if you want more and aren't concerned about E2E, you can opt for pCloud, which offers more storage space, and explore other privacy-focused cloud storage services.

Top 10 Best Free Cloud Storage Services for Linux
Which cloud service is the best for Linux? Check out this list of free cloud storage services that you can use in Linux.
It's FOSS | Abhishek Prakash

AI Chatbot

A replacement for Google Gemini? That's easy. I just set up Ollama to use one of the best open source LLMs, plus a web UI to easily access it without the terminal.

12 Tools to Provide a Web UI for Ollama
Don't want to use the CLI for Ollama for interacting with AI models? Fret not, we have some neat web UI tools that you can use to make it easy!
It's FOSS | Ankush Das

Everything runs locally; I do not have to worry about anything here. Of course, you need a decent system with a mid-range GPU like the RTX 3060 to run the AI models efficiently.
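If you want to try the same route, a minimal sketch of the Ollama workflow on Linux looks like this; the model name is just an example, and the install one-liner is the one documented on ollama.com at the time of writing:

    # install Ollama (official install script)
    curl -fsSL https://ollama.com/install.sh | sh

    # download and chat with a local model (model name is an example)
    ollama run llama3.2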
Video Conference

Google Meet is a dominantly used option for all kinds of meetings and interviews, but it is not open source, nor end-to-end encrypted. You can opt for Jitsi Meet if you want enhanced security and privacy. It is open source and end-to-end encrypted, and if needed, you can self-host it to meet your custom requirements. For more options, you can refer to our article as well:

5 Best Open Source Video Conferencing Tools [2024]
Don't trust the big tech with your data? Try these open source video conferencing tools for online meetings.
It's FOSS | Ankush Das

Website Analytics

Google Analytics is a popular choice among web administrators because it is free and provides a lot of information about your audience, helping you make better decisions regarding your content, product, and more. However, there are better (and lighter) Google Analytics alternatives out there that respect the privacy of your visitors/customers and still give you plenty of useful insights. The catch is that most of them are paid. I understand not everyone can afford to switch to a paid alternative, but you can start with the free ones (or self-host them) and choose to invest in paid options later. Explore all the alternatives here:

8 Best Google Analytics Alternatives You Can Try Today
It is not that complicated to switch away from Google Analytics with these options. You can opt for a free self-hosted one or go for a paid one as per your requirements.
Linux Handbook | Ankush Das

Web Browser

Google Chrome holds more than 65% of the market share, as per Statista. Sure, it is one of the most convenient options. However, it does not offer good privacy protection features. If you want to switch away to get better privacy, you can try Brave, Mullvad, or LibreWolf (a hardened version of Firefox).

Documents, Photos, Videos, and Maps

Honestly, there are no easy replacements here. So, you will have to decide if you are willing to adjust, and willing to pay a lot more (in some cases).

Nextcloud Office (a self-hosted replacement for Google Docs)

For instance, you will not get a Google Docs-like experience with Nextcloud Office (self-hosted) or CryptPad or Proton Docs, but they are manageable to some extent. So, you need to try them out and see if one fits your requirements.

Next up is a tough one: a replacement for Google Photos. Yes, Ente is an excellent open source privacy-focused option. However, it is way pricier than what Google Photos costs for extra storage, as you get only 5 GB for free. Therefore, if you have numerous photos and videos, it will be an expensive switch. You can choose to self-host it and explore other self-hosted Google Photos alternatives. But, of course, you will need to spend some time and money there as well.

Self-hosted Open Source Alternatives to Google Photos
Google Photos can be replaced using these open-source self-hosted photo applications.
It's FOSS | Ankush Das

Finally, we need a replacement for Google Maps. As much as we might hate Google, I can never object to Maps' usefulness and integration capabilities with every car out there. You can try open source alternatives like Organic Maps (for Android phones) and OpenStreetMap (web-based); however, they may not give you the same details and experience. So, choosing a document suite, maps, and photo/video platform to replace Google services will be an inconvenient endeavor. In my case, I have tried to use CryptPad as much as possible, but the other alternatives haven't worked out well for me.

Final Thoughts

With the options mentioned above, I might have reduced my Google-centric usage by 70%, but it is not 100% yet. I hope to reach that mark some day. I would love to know about your plans to do the same. Are you in the same boat as me? Have any other plans? Let me know in the comments below!
-
Chris’ Corner: JavaScript Ecosystem Tools
by: Chris Coyier Mon, 27 Jan 2025 17:10:10 +0000

I love a good exposé on how a front-end team operates. Like what technology they use, why, and how, particularly when there are pain points and journeys through them. Jim Simon of Reddit wrote one a bit ago about their team's build process. They were using something Rollup-based, getting 2-minute build times, and spent quite a bit of time and effort switching to Vite; now they are getting sub-1-second build times. I don't know if "wow, Vite is fast" is the right read here though, as they lost type checking entirely. Vite means esbuild for TypeScript, which just strips types, meaning no build process (locally, in CI, or otherwise) will catch errors. That seems like a massive deal to me, as it opens the door to all contributions having TypeScript errors. I admit I'm fascinated by the approach though; it's kinda like treating TypeScript as a local-only linter. Sure, VS Code complains and gives you red squiggles, but nothing else will, so use that information as you will. Very mixed feelings.

Vite always seems to be front and center in conversations about the JavaScript ecosystem these days. The tooling section of this year's JavaScript Rising Stars: (Interesting how it's actually Biome that gained the most stars this year and has large goals about being the toolchain for the web, like Vite.)

Vite actually has the bucks now to make a real run at it. It's always nail-biting and fascinating to see money being thrown around at front-end open source, as a strong business model around all that is hard to find. Maybe there is an enterprise story to capture? Somehow I can see that more easily. I would guess that's where the new venture vlt is seeing potential. npm, now being owned by Microsoft, certainly had a story there that investors probably liked to see, so maybe vlt can do it again but better. It's the "you've got their data" thing that adds up to me. Not that I love it, I just get it. Vite might have your stack, but we write checks to infrastructure companies.

That tinge of worry extends to Bun and Deno too. I think they can survive decently on the momentum of developers being excited about the speed and features. I wouldn't say I've got a full grasp on it, but I've seen some developers be pretty disillusioned, or at least trepidatious, with Deno and their package registry JSR. But Deno has products! They have enterprise consulting and various hosting. Data and product, I think that is all very smart. Maybe void(0) can find a product play in there.

This all reminds me of XState / Stately, which took a bit of funding, does open source, and productizes some of what they do. Their new Store library is getting lots of attention, which is good for the gander. To be clear, I'm rooting for all of these companies. They are small and only lightly funded companies, just like CodePen, trying to make tools to make web development better. 💜
-
Revisiting CSS Multi-Column Layout
by: Andy Clarke Mon, 27 Jan 2025 15:35:44 +0000 Honestly, it’s difficult for me to come to terms with, but almost 20 years have passed since I wrote my first book, Transcending CSS. In it, I explained how and why to use what was the then-emerging Multi-Column Layout module. Hint: I published an updated version, Transcending CSS Revisited, which is free to read online. Perhaps because, before the web, I’d worked in print, I was over-excited at the prospect of dividing content into columns without needing extra markup purely there for presentation. I’ve used Multi-Column Layout regularly ever since. Yet, CSS Columns remains one of the most underused CSS layout tools. I wonder why that is? Holes in the specification For a long time, there were, and still are, plenty of holes in Multi-Column Layout. As Rachel Andrew — now a specification editor — noted in her article five years ago: She’s right. And that’s still true. You can’t style columns, for example, by alternating background colours using some sort of :nth-column() pseudo-class selector. You can add a column-rule between columns using border-style values like dashed, dotted, and solid, and who can forget those evergreen groove and ridge styles? But you can’t apply border-image values to a column-rule, which seems odd as they were introduced at roughly the same time. The Multi-Column Layout is imperfect, and there’s plenty I wish it could do in the future, but that doesn’t explain why most people ignore what it can do today. Patchy browser implementation for a long time Legacy browsers simply ignored the column properties they couldn’t process. But, when Multi-Column Layout was first launched, most designers and developers had yet to accept that websites needn’t look the same in every browser. Early on, support for Multi-Column Layout was patchy. However, browsers caught up over time, and although there are still discrepancies — especially in controlling content breaks — Multi-Column Layout has now been implemented widely. Yet, for some reason, many designers and developers I speak to feel that CSS Columns remain broken. Yes, there’s plenty that browser makers should do to improve their implementations, but that shouldn’t prevent people from using the solid parts today. Readability and usability with scrolling Maybe the main reason designers and developers haven’t embraced Multi-Column Layout as they have CSS Grid and Flexbox isn’t in the specification or its implementation but in its usability. Rachel pointed this out in her article: That’s true. No one would enjoy repeatedly scrolling up and down to read a long passage of content set in columns. She went on: But, let’s face it, thinking very carefully is what designers and developers should always be doing. Sure, if you’re dumb enough to dump a large amount of content into columns without thinking about its design, you’ll end up serving readers a poor experience. But why would you do that when headlines, images, and quotes can span columns and reset the column flow, instantly improving readability? Add to that container queries and newer unit values for text sizing, and there really isn’t a reason to avoid using Multi-Column Layout any longer. A brief refresher on properties and values Let’s run through a refresher. 
There are two ways to flow content into multiple columns; first, by defining the number of columns you need using the column-count property: CodePen Embed Fallback Second, and often best, is specifying the column width, leaving a browser to decide how many columns will fit along the inline axis. For example, I'm using column-width to specify that my columns should be at least 18rem wide. A browser creates as many 18rem columns as possible to fit and then shares any remaining space between them. CodePen Embed Fallback Then, there is the gutter (or column-gap) between columns, which you can specify using any length unit. I prefer using rem units to maintain the gutters' relationship to the text size, but if your gutters need to be 1em, you can leave this out, as that's a browser's default gap. CodePen Embed Fallback The final column property adds a divider (or column-rule) to the gutters, providing visual separation between columns. Again, you can set a thickness and use border-style values like dashed, dotted, and solid. CodePen Embed Fallback These examples will be seen whenever you encounter a Multi-Column Layout tutorial, including CSS-Tricks' own Almanac. The Multi-Column Layout syntax is one of the simplest in the suite of CSS layout tools, which is another reason why there are few reasons not to use it.
Multi-Column Layout is even more relevant today
When I wrote Transcending CSS and first explained the emerging Multi-Column Layout, there were no rem or viewport units, no :has() or other advanced selectors, no container queries, and no routine use of media queries because responsive design hadn't been invented. We didn't have calc() or clamp() for adjusting text sizes, and there was no CSS Grid or Flexible Box Layout for precise control over a layout. Now we do, and all these properties help to make Multi-Column Layout even more relevant today. Now, you can use rem or viewport units combined with calc() and clamp() to adapt the text size inside CSS Columns. You can use :has() to specify when columns are created, depending on the type of content they contain. Or you might use container queries to implement several columns only when a container is large enough to display them. Of course, you can also combine a Multi-Column Layout with CSS Grid or Flexible Box Layout for even more imaginative layout designs.
Using Multi-Column Layout today
Patty Meltt is an up-and-coming country music sensation. She's not real, but the challenges of designing and developing websites like hers are. My challenge was to implement a flexible article layout without media queries which adapts not only to screen size but also whether or not a <figure> is present. To improve the readability of running text in what would potentially be too-long lines, it should be set in columns to narrow the measure. And, as a final touch, the text size should adapt to the width of the container, not the viewport. Article with no <figure> element. What would potentially be too-long lines of text are set in columns to improve readability by narrowing the measure. Article containing a <figure> element. No column text is needed for this narrower measure. The HTML for this layout is rudimentary. One <section>, one <main>, and one <figure> (or not): <section> <main> <h1>About Patty</h1> <p>…</p> </main> <figure> <img> </figure> </section> I started by adding Multi-Column Layout styles to the <main> element using the column-width property to set the width of each column to 40ch (characters).
The max-width and automatic inline margins reduce the content width and center it in the viewport: main { margin-inline: auto; max-width: 100ch; column-width: 40ch; column-gap: 3rem; column-rule: .5px solid #98838F; } Next, I applied a flexible box layout to the <section> only if it :has() a direct descendant which is a <figure>: section:has(> figure) { display: flex; flex-wrap: wrap; gap: 0 3rem; } This next min-width: min(100%, 30rem) — applied to both the <main> and <figure> — is a combination of the min-width property and the min() CSS function. The min() function allows you to specify two or more values, and a browser will choose the smallest value from them. This is incredibly useful for responsive layouts where you want to control the size of an element based on different conditions: section:has(> figure) main { flex: 1; margin-inline: 0; min-width: min(100%, 30rem); } section:has(> figure) figure { flex: 4; min-width: min(100%, 30rem); } What's efficient about this implementation is that Multi-Column Layout styles are applied throughout, with no need for media queries to switch them on or off. Adjusting text size in relation to column width helps improve readability. This has only recently become easy to implement with the introduction of container queries and their associated values including cqi, cqw, cqmin, and cqmax, along with the clamp() function. Fortunately, you don't have to work out these text sizes manually as ClearLeft's Utopia will do the job for you. My headlines and paragraph sizes are clamped to their minimum and maximum rem sizes, and between them, text is fluid depending on their container's inline size: h1 { font-size: clamp(5.6526rem, 5.4068rem + 1.2288cqi, 6.3592rem); } h2 { font-size: clamp(1.9994rem, 1.9125rem + 0.4347cqi, 2.2493rem); } p { font-size: clamp(1rem, 0.9565rem + 0.2174cqi, 1.125rem); } So, to specify the <main> as the container on which those text sizes are based, I applied a container query for its inline size: main { container-type: inline-size; } Open the final result in a desktop browser, when you're in front of one. It's a flexible article layout without media queries which adapts to screen size and the presence of a <figure>. Multi-Column Layout sets text in columns to narrow the measure and the text size adapts to the width of its container, not the viewport. CodePen Embed Fallback
Modern CSS is solving many prior problems
Structure content with spanning elements, which restart the flow of columns and prevent people from scrolling long distances. Prevent figures from dividing their images and captions between columns. Almost every article I've ever read about Multi-Column Layout focuses on its flaws, especially usability. CSS-Tricks' own Geoff Graham even mentioned the scrolling up and down issue when he asked, "When Do You Use CSS Columns?" Fortunately, the column-span property — which enables headlines, images, and quotes to span columns, resets the column flow, and instantly improves readability — now has solid support in browsers: h1, h2, blockquote { column-span: all; } But the solution to the scrolling up and down issue isn't purely technical. It also requires content design. This means that content creators and designers must think carefully about the frequency and type of spanning elements, dividing a Multi-Column Layout into shallower sections, reducing the need to scroll and improving someone's reading experience.
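Pulling the pieces covered so far into one block, here is a minimal sketch of the pattern; the values are illustrative rather than taken from the article's demos:

main {
  column-width: 18rem;          /* as many columns of at least 18rem as will fit */
  column-gap: 2rem;             /* gutter between columns */
  column-rule: 1px solid #ccc;  /* divider drawn inside the gutter */
}

main h2,
main blockquote {
  column-span: all;             /* spanners reset the column flow */
}

Spanning elements divide the columns into shallower sections, which is exactly the kind of content design the article argues for.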
Another prior problem was preventing headlines from becoming detached from their content and figures, dividing their images and captions between columns. Thankfully, the break-after property now also has widespread support, so orphaned images and captions are now a thing of the past: figure { break-after: column; } Open this final example in a desktop browser: CodePen Embed Fallback You should take a fresh look at Multi-Column Layout Multi-Column Layout isn’t a shiny new tool. In fact, it remains one of the most underused layout tools in CSS. It’s had, and still has, plenty of problems, but they haven’t reduced its usefulness or its ability to add an extra level of refinement to a product or website’s design. Whether you haven’t used Multi-Column Layout in a while or maybe have never tried it, now’s the time to take a fresh look at Multi-Column Layout. Revisiting CSS Multi-Column Layout originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
-
I Ran Deepseek R1 on Raspberry Pi 5 and No, it Wasn't 200 tokens/s
by: Abhishek Kumar Since the launch of DeepSeek AI, every tech media outlet has been losing its mind over it. It's been shattering records, breaking benchmarks, and becoming the go-to name in AI innovation. DeepSeek vs. OpenAI benchmark | Source: Brian Roemmele Recently, I stumbled upon a post on my X feed (don't judge me, I'm moving to Bluesky soon!) where someone claimed to have run DeepSeek on a Raspberry Pi at 200 tokens/second. My head started spinning. "wHaaaTTT?!" Naturally, I doom-scrolled the entire thread to make sense of it. Turns out, the guy used an AI accelerator module on top of the Pi to hit those numbers. But curiosity is a powerful motivator. Since I didn't have an AI module lying around, I thought, why not test the raw performance of DeepSeek on a plain Raspberry Pi 5? Who's stopping me? So, for this article, I installed Ollama on my Pi 5 (8 GB model) and downloaded the DeepSeek model at different parameter sizes (1.5B, 7B, 8B, and 14B, to be specific). 💡 If you're new or unsure about setting things up, don't worry, we already have a detailed guide on installing Ollama on a Raspberry Pi to help you get started. Here's how each one performed:
DeepSeek 1.5B
This model was snappy. It felt surprisingly responsive and handled paraphrasing tasks with ease. I didn't encounter any hallucinations, making it a solid choice for day-to-day tasks like summarization and text generation.
Performance stats
To test its capability further, I posed the question: What's the difference between Podman and Docker? The model gave a decent enough answer, clearly breaking down the differences between the two containerization tools. It highlighted how Podman is daemonless, while Docker relies on a daemon, and touched on security aspects like rootless operation. This response took about two minutes, and here's how the performance data stacked up:
total duration: 1m33.59302487s
load duration: 44.322672ms
prompt eval count: 13 token(s)
prompt eval duration: 985ms
prompt eval rate: 13.20 tokens/s
eval count: 855 token(s)
eval duration: 1m32.562s
eval rate: 9.24 tokens/s
DeepSeek 7B
The 7B model introduced a fair amount of hallucination. I tried writing a creative prompt asking for three haikus, but it started generating endless text, even asking itself questions! While amusing, it wasn't exactly practical. For benchmarking purposes, I simplified my prompts, as seen in the video. Performance-wise, it was slower, but still functional.
Performance stats
To test it further, I asked: What's the difference between Docker Compose and Docker Run? The response was a blend of accurate and imprecise information. It correctly explained that Docker Compose is used to manage multi-container applications via a docker-compose.yml file, while Docker Run is typically for running single containers with specific flags. However, it soon spiraled into asking itself questions like, "But for a single app, say a simple Flask app on a single machine, Docker Run might be sufficient? Or is there another command or method?" Here's how the performance data turned out:
total duration: 4m20.665430872s
load duration: 39.565944ms
prompt eval count: 11 token(s)
prompt eval duration: 3.256s
prompt eval rate: 3.38 tokens/s
eval count: 517 token(s)
eval duration: 4m17.368s
eval rate: 2.01 tokens/s
DeepSeek 8B
This was the wild card. I didn't expect the 8B model to run at all, considering how resource-hungry these models are. To my surprise, it worked!
The performance was on par with the 7B model, neither fast nor particularly responsive, but hey, running an 8B model on a Raspberry Pi without extra hardware is a win in my book.
Performance stats
I tested it by asking, "Write an HTML boilerplate and CSS boilerplate." The model successfully generated a functional HTML and CSS boilerplate in a single code block, ensuring they were neatly paired. However, before jumping into the solution, the model explained its approach, what it was going to do and what else could be added. While this was informative, it felt unnecessary for a straightforward query. If I had crafted the prompt more precisely, the response might have been more direct (i.e. user error). Here's the performance breakdown:
total duration: 6m53.350371838s
load duration: 44.410437ms
prompt eval count: 13 token(s)
prompt eval duration: 4.99s
prompt eval rate: 2.61 tokens/s
eval count: 826 token(s)
eval duration: 6m48.314s
eval rate: 2.02 tokens/s
DeepSeek 14B
Unfortunately, this didn't work. The 14B model required over 10 GB of RAM, which my 8 GB Pi couldn't handle. After the success of the 8B model, my hopes were high, but alas, reality struck.
Conclusion
DeepSeek's raw performance on the Raspberry Pi 5 showcases the growing potential of SBCs for AI workloads. The 1.5B model is a practical option for lightweight tasks, while the 7B and 8B models demonstrate the Pi's ability to handle larger workloads, albeit slowly. I'm excited to test DeepSeek on the ArmSoM AIM7 with its 6 TOPS NPU. Its RK3588 SoC could unlock even better performance, and I'll cover those results in a future article. If you're interested in more of my experiments, check out this article where I ran 9 popular LLMs on the Raspberry Pi 5. Until then, happy tinkering, and remember: don't ask AI to write haikus unless you want a never-ending saga. 😉
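If you want to reproduce these numbers, the stats above are what Ollama prints when you run a model with its verbose flag; assuming the distilled deepseek-r1 tags from the Ollama library, the runs look like this:

# pull the distilled R1 models (tags assumed from the Ollama library)
ollama pull deepseek-r1:1.5b
ollama pull deepseek-r1:7b
ollama pull deepseek-r1:8b

# --verbose prints total duration, prompt eval rate, eval rate, etc.
ollama run --verbose deepseek-r1:1.5b "What's the difference between Podman and Docker?"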
-
How to Update Kali Linux?
By: Janus Atienza Sun, 26 Jan 2025 16:06:55 +0000 Kali Linux is a Debian-based, open-source operating system that's ideal for penetration testing, reverse engineering, security auditing, and computer forensics. It follows a rolling release model, with multiple updates of the OS available in a year, offering you access to a pool of advanced tools that keep your software secure. But how do you update Kali Linux to the latest version to avoid risks and compatibility issues? To help you in this regard, we are going to discuss the step-by-step process of updating Kali Linux and its benefits. Let's begin!
How to Update Kali Linux: Step-by-Step Guide
A lot of custom IoT development professionals, hired to build smart solutions, use Kali Linux for advanced penetration testing and even reverse engineering. However, it is important to keep it updated to avoid vulnerabilities. Before starting the update process, make sure you have a stable internet connection and administrative rights. Here are the steps you can follow:
Step 1: Check the Sources List File
The Kali Linux package manager fetches updates from the repository, so you first need to make sure that the system's repository list is properly configured. Here's how to check it: Open the terminal and run the following command to view the sources list file: cat /etc/apt/sources.list The output will display this list if your system is using the Kali Linux rolling release repository: deb http://kali.download/kali kali-rolling main contrib non-free non-free-firmware If the file is empty or has incorrect entries, you can edit it using editors like nano or Vim. Once you are sure that the list has only official and correct Kali Linux entries, save and close the editor.
Step 2: Update the Package Information
The next step is to update the package information using the repository list so the Kali Linux system knows about all the latest versions and updates available. The steps for that are: In the terminal, run this command: sudo apt update This command updates the system's package index to the latest repository information. You will also see a list of packages being checked and their status (available for upgrade or not). Note: It only fetches the list of new package versions and doesn't install or update them!
Step 3: Do a System Upgrade
The third step involves performing a system upgrade to install the latest versions and updates. Run the apt upgrade command to update all the installed packages to their latest versions. However, unlike a full system upgrade, this command doesn't remove or install any package from the system. You can then use apt full-upgrade, which upgrades all the packages and may even add or remove some to keep your system running smoothly. The apt dist-upgrade command is used when you want to handle package dependency changes, remove obsolete packages, and add new ones. Review all the changes that the commands have made and confirm the upgrade.
Step 4: Get Rid of Unnecessary Packages
Over time, useless files can accumulate in your system, taking up valuable disk space. You should get rid of them to declutter the system and free up storage.
Here are the steps for that: To remove the leftover packages, run the command: sudo apt autoremove -y Cached files also take up a lot of disk space, and you can remove them via the following command: sudo apt autoclean
Step 5: Double-Check the Update
Once you are all done installing the latest software, you should double-check to see if the system is actually running the upgrade. For this, run the command: cat /etc/os-release You can then see operating system information like version details and release date.
Step 6: It's Time to Reboot the System
Well, this step is optional, but we suggest rebooting Kali Linux to ensure that the system is running the latest version and that all changes are fully applied. You can then perform tasks like security testing of custom IoT development processes. The command for this is: sudo reboot
Why Update Kali Linux to the Latest Version?
Software development and deployment trends are changing quickly. Now that you know how to update and upgrade Kali Linux, you must be wondering why you should update the system and what its impacts are. If so, here are some compelling reasons:
Security Fixes and Patches
Cybercrimes are quickly increasing, and statistics show that 43% of organizations lose existing customers because of cyber attacks. Additionally, individuals lose around $318 billion to cybercrime. However, when you update to the latest version of Kali Linux, you get advanced security fixes and patches. They patch known system vulnerabilities and help make sure that professionals don't fall victim to such malicious attempts.
Access to New Tools and Features
Kali Linux offers many features and tools like Metasploit, Nmap, and others, and they receive consistent updates from their developers. So, upgrading the OS ensures that you are using the latest version of all pre-installed tools. You enjoy better functionality and improved system performance that make your daily tasks more efficient. For instance, the updated version of Nmap has fast scanning capabilities that pave the way for quick security auditing and troubleshooting.
Compatibility with New Technologies
Technology is evolving, and new protocols and software are introduced every day. The developers behind Kali Linux are well aware of these shifts. They are pushing regular updates that support these newer technologies for better system compatibility.
Conclusion
The process of updating Kali Linux becomes easy if you are aware of the correct commands and understand the difference between the upgrade options. Most importantly, don't forget to reboot your system after a major update, like a kernel update, to make sure that changes are configured properly.
FAQs
How often should I update Kali Linux? It's advisable to update Kali Linux at least once a week or whenever there are new system updates. The purpose is to make sure that the system is secure and has all the latest features by receiving security patches and addressing all vulnerabilities. Can I update Kali Linux without using the terminal? No, you cannot update Kali Linux without using the terminal. To update the system, you can use the apt and apt-get commands. The steps involved in this process include checking the source file, updating the package repository, and upgrading the system. Is Kali Linux good for learning cyber security? Yes, Kali Linux is a good tool for learning cyber security. It has a range of tools for penetration testing, network security, analysis, and vulnerability scanning. The post How to Update Kali Linux? appeared first on Unixmen.
-
How AI is Revolutionizing Linux System Administration: Tools and Techniques for Automation
By: Janus Atienza Sun, 26 Jan 2025 00:06:01 +0000 AI-powered tools are changing the software development scene as we speak. AI assistants can not only help with coding, using advanced machine learning algorithms to improve their service, but they can also help with code refactoring, testing, and bug detection. Tools like GitHub Copilot and Tabnine aim to automate various processes, allowing developers more free time for other, more creative tasks. Of course, implementing AI tools takes time and careful risk assessment because various factors need to be taken into consideration. Let's review some of the most popular automation tools available for Linux.
Why Use AI-Powered Software Tools in Linux?
AI is being widely used across various spheres of our lives, with businesses utilizing the power of Artificial Intelligence to create new services and products. Even sites like Depositphotos have started offering AI services to create exclusive licensed photos that can be used anywhere – on websites, in advertising, design, and print media. Naturally, software development teams and Linux users have also started implementing AI-powered tools to improve their workflow. Here are some of the benefits of using such tools: An improved user experience. Fewer human errors in various processes. Automation of repetitive tasks boosts overall productivity. New features become available. Innovative problem-solving.
Top AI Automation Tools for Linux
Streamlining processes can greatly increase productivity, allowing developers and Linux users to delegate repetitive tasks to AI-powered software. They offer innovative solutions while optimizing different parts of the development process. Let's review some of them. 1. GitHub Copilot Just a few years ago, no one could've imagined that coding could be done by an AI algorithm. This AI-powered software can predict the completion of the code that's being created, offering different functions and snippets on the go. GitHub Copilot can become an invaluable tool for both expert and novice coders. The algorithms can understand the code that's being written using OpenAI's Codex model. It supports various programming languages and can be easily integrated with the majority of IDEs. One of its key benefits is code suggestion based on the context of what's being created. 2. DeepCode One of the biggest issues all developers face when writing code is potential bugs. This is where an AI-powered code review tool can come in handy. While it won't help you create the code, it will look for vulnerabilities inside your project, giving context-based feedback and a variety of suggestions to fix the bugs found by the program. Thus, it can help developers improve the quality of their work. DeepCode uses machine learning to become a better help over time, offering improved suggestions as it learns more about the type of work done by the developer. This tool can easily integrate with GitLab, GitHub, and Bitbucket. 3. Tabnine Do you want an AI-powered tool that can actually learn from your coding style and offer suggestions based on it? Tabnine can do exactly that, predicting functions and offering snippets of code based on what you're writing. It can be customized for a variety of needs and operations while supporting 30 programming languages. You can use this tool offline for improved security. 4. CircleCI This is a powerful continuous integration and continuous delivery platform that helps automate software development operations.
It helps engineering teams build code easily, offering automatic tests at each stage of the process, whenever a change is implemented in the system. You can develop your app quickly and easily with CircleCI's automated testing that involves mobile, serverless, API, web, and AI frameworks. This is the CI/CD expert that will help you significantly reduce testing time and build simple and stable systems. 5. Selenium This is one of the most popular testing tools used by developers all over the world. It's compatible across various platforms, including Linux, due to the open-source nature of this framework. It offers a seamless process of generating and managing test cases, as well as compiling project reports. It can collaborate with continuous automated testing tools for better results. 6. Code Intelligence This is yet another tool capable of analyzing the source code to detect bugs and vulnerabilities without human supervision. It can find inconsistencies that are often missed by other testing methods, allowing the developing teams to resolve issues before the software is released. This tool works autonomously and simplifies root cause analysis. It utilizes self-learning AI capabilities to boost the testing process and swiftly pinpoints the line of code that contains the bug. 7. ONLYOFFICE Docs This open-source office suite allows real-time collaboration and offers a few interesting options when it comes to AI. You can install a plugin and get access to ChatGPT for free and use its features while creating a document. Some of the most handy ones include translation, spellcheck, grammar correction, word analysis, and text generation. You can also generate images for your documents and have a chat with ChatGPT while working on your project.
Conclusion
When it comes to the Linux operating system, there are numerous AI-powered automation tools you can try. A lot of them are used in software development to improve the code-writing process and allow developers to have more free time for other tasks. AI tools can utilize machine learning to provide you with better services while offering a variety of ways to streamline your workflow. Tools such as DeepCode, Tabnine, GitHub Copilot, and Selenium can look for solutions whenever you're facing issues with your software. These programs will also offer snippets of code on the go while checking your project for bugs. The post How AI is Revolutionizing Linux System Administration: Tools and Techniques for Automation appeared first on Unixmen.
-
Mastering email encryption on Linux
By: Janus Atienza Sat, 25 Jan 2025 23:26:38 +0000 In today's digital age, safeguarding your communication is paramount. Email encryption serves as a crucial tool to protect sensitive data from unauthorized access. Linux users, known for their preference for open-source solutions, must embrace encryption to ensure privacy and security. With increasing cyber threats, the need for secure email communications has never been more critical. Email encryption acts as a protective shield, ensuring that only intended recipients can read the content of your emails. For Linux users, employing encryption techniques not only enhances personal data protection but also aligns with the ethos of secure and open-source computing. This guide will walk you through the essentials of setting up email encryption on Linux and how you can integrate advanced solutions to bolster your security.
Setting up email encryption on Linux
Implementing email encryption on Linux can be straightforward with the right tools. Popular email clients like Thunderbird and Evolution support OpenPGP and S/MIME protocols for encrypting emails. Begin by installing GnuPG, open-source software that provides cryptographic privacy and authentication. Once installed, generate a pair of keys—a public key to share with those you communicate with and a private key that remains confidential to you. Configure your chosen email client to use these keys for encrypting and decrypting emails. The interface typically offers user-friendly options to enable encryption settings directly within the email composition window. To further assist in this setup, many online tutorials offer detailed guides complete with screenshots to ease the process for beginners. Additionally, staying updated with the latest software versions is recommended to ensure optimal security features are in place.
How email encryption works
Email encryption is a process that transforms readable text into a scrambled format that can only be decoded by the intended recipient. It is essential for maintaining privacy and security in digital communications. As technology advances, so do the methods used by cybercriminals to intercept sensitive information. Thus, understanding the principles of email encryption becomes crucial. The basic principle of encryption involves using keys—a public key for encrypting emails and a private key for decrypting them. This ensures that even if emails are intercepted during transmission, they remain unreadable without the correct decryption key. Whether you're using email services like Gmail or Outlook, integrating encryption can significantly reduce the risk of data breaches. Many email providers offer built-in encryption features, but for Linux users seeking more control, there are numerous open-source tools available. Email encryption from Trustifi provides an additional layer of security by incorporating advanced AI-powered solutions into your existing setup.
Integrating advanced encryption solutions
For those seeking enhanced security measures beyond standard practices, integrating solutions like Trustifi into your Linux-based email clients can be highly beneficial. Trustifi offers services such as inbound threat protection and outbound email encryption powered by AI technology. The integration process involves installing Trustifi's plugin or API into your existing email infrastructure. This enables comprehensive protection against potential threats while ensuring that encrypted communications are seamless and efficient.
With Trustifi’s advanced algorithms, businesses can rest assured that their communications are safeguarded against both current and emerging cyber threats. This approach not only protects sensitive data but also simplifies compliance with regulatory standards regarding data protection and privacy. Businesses leveraging such tools position themselves better in preventing data breaches and maintaining customer trust. Best practices for secure email communication Beyond technical setups, maintaining secure email practices is equally important. Start by using strong passwords that combine letters, numbers, and symbols; avoid easily guessed phrases or patterns. Enabling two-factor authentication adds another layer of security by requiring additional verification steps before accessing accounts. Regularly updating software helps protect against vulnerabilities that hackers might exploit. Many systems offer automatic updates; however, manually checking for updates can ensure no critical patches are missed. Staying informed about the latest security threats allows users to adapt their strategies accordingly. Ultimately, being proactive about security measures cultivates a safer digital environment for both personal and professional communications. Adopting these practices alongside robust encryption technologies ensures comprehensive protection against unauthorized access. The post Mastering email encryption on Linux appeared first on Unixmen.
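As a footnote to the GnuPG setup described above, the whole routine boils down to a handful of commands once GnuPG is installed; the package name is for Debian-based systems, and the email addresses and file names are placeholders:

sudo apt install gnupg                               # often preinstalled already

gpg --full-generate-key                              # create your public/private key pair
gpg --armor --export you@example.com > pubkey.asc    # export your public key to share
gpg --import friend-pubkey.asc                       # import a correspondent's public key

gpg --encrypt --recipient friend@example.com message.txt   # produces message.txt.gpg
gpg --decrypt message.txt.gpg                        # decrypt mail sent to you

Clients like Thunderbird perform these same steps through their OpenPGP settings, but the command line makes the key handling explicit.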
-
Positioning Text Around Elements With CSS Offset
by: Preethi Fri, 24 Jan 2025 14:59:25 +0000 When it comes to positioning elements on a page, including text, there are many ways to go about it in CSS — the literal position property with corresponding inset-* properties, translate, margin, anchor() (limited browser support at the moment), and so forth. The offset property is another one that belongs in that list. The offset property is typically used for animating an element along a predetermined path. For instance, the square in the following example traverses a circular path: <div class="circle"> <div class="square"></div> </div> @property --p { syntax: '<percentage>'; inherits: false; initial-value: 0%; } .square { offset: top 50% right 50% circle(50%) var(--p); transition: --p 1s linear; /* Equivalent to: offset-position: top 50% right 50%; offset-path: circle(50%); offset-distance: var(--p); */ /* etc. */ } .circle:hover .square{ --p: 100%; } CodePen Embed Fallback A registered CSS custom property (--p) is used to set and animate the offset distance of the square element. The animation is possible because an element can be positioned at any point in a given path using offset. And maybe you didn't know this, but offset is a shorthand property comprised of the following constituent properties:
offset-position: The path's starting point
offset-path: The shape along which the element can be moved
offset-distance: A distance along the path on which the element is moved
offset-rotate: The rotation angle of an element relative to its anchor point and offset path
offset-anchor: A position within the element that's aligned to the path
The offset-path property is the one that's important to what we're trying to achieve. It accepts a shape value — including SVG shapes or CSS shape functions — as well as reference boxes of the containing element to create the path. Reference boxes? Those are an element's dimensions according to the CSS Box Model, including content-box, padding-box, border-box, as well as SVG contexts, such as the view-box, fill-box, and stroke-box. These simplify how we position elements along the edges of their containing elements. Here's an example: all the small squares below are placed in the default top-left corner of their containing elements' content-box. In contrast, the small circles are positioned along the top-right corner (25% into their containing elements' square perimeter) of the content-box, border-box, and padding-box, respectively. <div class="big"> <div class="small circle"></div> <div class="small square"></div> <p>She sells sea shells by the seashore</p> </div> <div class="big"> <div class="small circle"></div> <div class="small square"></div> <p>She sells sea shells by the seashore</p> </div> <div class="big"> <div class="small circle"></div> <div class="small square"></div> <p>She sells sea shells by the seashore</p> </div> .small { /* etc. */ position: absolute; &.square { offset: content-box; border-radius: 4px; } &.circle { border-radius: 50%; } } .big { /* etc. */ contain: layout; /* (or position: relative) */ &:nth-of-type(1) { .circle { offset: content-box 25%; } } &:nth-of-type(2) { border: 20px solid rgb(170 232 251); .circle { offset: border-box 25%; } } &:nth-of-type(3) { padding: 20px; .circle { offset: padding-box 25%; } } } CodePen Embed Fallback Note: You can separate the element's offset-positioned layout context if you don't want to allocate space for it inside its containing parent element.
That’s how I’ve approached it in the example above so that the paragraph text inside can sit flush against the edges. As a result, the offset positioned elements (small squares and circles) are given their own contexts using position: absolute, which removes them from the normal document flow. This method, positioning relative to reference boxes, makes it easy to place elements like notification dots and ornamental ribbon tips along the periphery of some UI module. It further simplifies the placement of texts along a containing block’s edges, as offset can also rotate elements along the path, thanks to offset-rotate. A simple example shows the date of an article placed at a block’s right edge: <article> <h1>The Irreplaceable Value of Human Decision-Making in the Age of AI</h1> <!-- paragraphs --> <div class="date">Published on 11<sup>th</sup> Dec</div> <cite>An excerpt from the HBR article</cite> </article> article { container-type: inline-size; /* etc. */ } .date { offset: padding-box 100cqw 90deg / left 0 bottom -10px; /* Equivalent to: offset-path: padding-box; offset-distance: 100cqw; (100% of the container element's width) offset-rotate: 90deg; offset-anchor: left 0 bottom -10px; */ } CodePen Embed Fallback As we just saw, using the offset property with a reference box path and container units is even more efficient — you can easily set the offset distance based on the containing element’s width or height. I’ll include a reference for learning more about container queries and container query units in the “Further Reading” section at the end of this article. There’s also the offset-anchor property that’s used in that last example. It provides the anchor for the element’s displacement and rotation — for instance, the 90 degree rotation in the example happens from the element’s bottom-left corner. The offset-anchor property can also be used to move the element either inward or outward from the reference box by adjusting inset-* values — for instance, the bottom -10px arguments pull the element’s bottom edge outwards from its containing element’s padding-box. This enhances the precision of placements, also demonstrated below. <figure> <div class="big">4</div> <div class="small">number four</div> </figure> .small { width: max-content; offset: content-box 90% -54deg / center -3rem; /* Equivalent to: offset-path: content-box; offset-distance: 90%; offset-rotate: -54deg; offset-anchor: center -3rem; */ font-size: 1.5rem; color: navy; } CodePen Embed Fallback As shown at the beginning of the article, offset positioning is animateable, which allows for dynamic design effects, like this: <article> <figure> <div class="small one">17<sup>th</sup> Jan. 2025</div> <span class="big">Seminar<br>on<br>Literature</span> <div class="small two">Tickets Available</div> </figure> </article> @property --d { syntax: "<percentage>"; inherits: false; initial-value: 0%; } .small { /* other style rules */ offset: content-box var(--d) 0deg / left center; /* Equivalent to: offset-path: content-box; offset-distance: var(--d); offset-rotate: 0deg; offset-anchor: left center; */ transition: --d .2s linear; &.one { --d: 2%; } &.two { --d: 70%; } } article:hover figure { .one { --d: 15%; } .two { --d: 80%; } } CodePen Embed Fallback Wrapping up Whether for graphic designs like text along borders, textual annotations, or even dynamic texts like error messaging, CSS offset is an easy-to-use option to achieve all of that. 
We can position the elements along the reference boxes of their containing parent elements, rotate them, and even add animation if needed. Further reading The CSS offset-path property: CSS-Tricks, MDN The CSS offset-anchor property: CSS-Tricks, MDN Container query length units: CSS-Tricks, MDN The @property at-rule: CSS-Tricks, web.dev The CSS Box Model: CSS-Tricks SVG Reference Boxes: W3C Positioning Text Around Elements With CSS Offset originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
-
Some Things You Might Not Know About Custom Counter Styles
by: Geoff Graham Thu, 23 Jan 2025 17:21:15 +0000 I was reading through Juan’s recent Almanac entry for the @counter-style at-rule and I’ll be darned if he didn’t uncover and unpack some extremely interesting things that we can do to style lists, notably the list marker. You’re probably already aware of the ::marker pseudo-element. You’ve more than likely dabbled with custom counters using counter-reset and counter-increment. Or maybe your way of doing things is to wipe out the list-style (careful when doing that!) and hand-roll a marker on the list item’s ::before pseudo. But have you toyed around with @counter-style? Turns out it does a lot of heavy lifting and opens up new ways of working with lists and list markers. You can style the marker of just one list item This is called a “fixed” system set to a specific item. @counter-style style-fourth-item { system: fixed 4; symbols: "💠"; suffix: " "; } li { list-style: style-fourth-item; } CodePen Embed Fallback You can assign characters to specific markers If you go with an “additive” system, then you can define which symbols belong to which list items. @counter-style dice { system: additive; additive-symbols: 6 "⚅", 5 "⚄", 4 "⚃", 3 "⚂", 2 "⚁", 1 "⚀"; suffix: " "; } li { list-style: dice; } CodePen Embed Fallback Notice how the system repeats once it reaches the end of the cycle and begins a new series based on the first item in the pattern. So, for example, there are six sides to typical dice and we start rolling two dice on the seventh list item, totaling seven. You can add a prefix and suffix to list markers A long while back, Chris showed off a way to insert punctuation at the end of a list marker using the list item’s ::before pseudo: ol { list-style: none; counter-reset: my-awesome-counter; li { counter-increment: my-awesome-counter; &::before { content: counter(my-awesome-counter) ") "; } } } That’s much easier these days with @counter-styles: @counter-style parentheses { system: extends decimal; prefix: "("; suffix: ") "; } CodePen Embed Fallback You can style multiple ranges of list items Let’s say you have a list of 10 items but you only want to style items 1-3. We can set a range for that: @counter-style single-range { system: extends upper-roman; suffix: "."; range: 1 3; } li { list-style: single-range; } CodePen Embed Fallback We can even extend our own dice example from earlier: @counter-style dice { system: additive; additive-symbols: 6 "⚅", 5 "⚄", 4 "⚃", 3 "⚂", 2 "⚁", 1 "⚀"; suffix: " "; } @counter-style single-range { system: extends dice; suffix: "."; range: 1 3; } li { list-style: single-range; } CodePen Embed Fallback Another way to do that is to use the infinite keyword as the first value: @counter-style dice { system: additive; additive-symbols: 6 "⚅", 5 "⚄", 4 "⚃", 3 "⚂", 2 "⚁", 1 "⚀"; suffix: " "; } @counter-style single-range { system: extends dice; suffix: "."; range: infinite 3; } li { list-style: single-range; } Speaking of infinite, you can set it as the second value and it will count up infinitely for as many list items as you have. Maybe you want to style two ranges at a time and include items 6-9. I’m not sure why the heck you’d want to do that but I’m sure you (or your HIPPO) have got good reasons. 
@counter-style dice { system: additive; additive-symbols: 6 "⚅", 5 "⚄", 4 "⚃", 3 "⚂", 2 "⚁", 1 "⚀"; suffix: " "; } @counter-style multiple-ranges { system: extends dice; suffix: "."; range: 1 3, 6 9; } li { list-style: multiple-ranges; } CodePen Embed Fallback You can add padding around the list markers You ever run into a situation where your list markers are unevenly aligned? That usually happens when going from, say, a single digit to a double-digit. You can pad the marker with extra characters to line things up. /* adds leading zeroes to list item markers */ @counter-style zero-padded-example { system: extends decimal; pad: 3 "0"; } Now the markers will always be aligned… well, up to 999 items. CodePen Embed Fallback That’s it! I just thought those were some pretty interesting ways to work with list markers in CSS that run counter (get it?!) to how I’ve traditionally approached this sort of thing. And with @counter-style becoming Baseline “newly available” in September 2023, it’s well-supported in browsers. Some Things You Might Not Know About Custom Counter Styles originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
-
Automatically Update Your Docker Containers with Watchtower
by: Abhishek Kumar Thu, 23 Jan 2025 11:22:15 +0530 Imagine this: You've deployed a handful of Docker containers to power your favorite applications, maybe a self-hosted Nextcloud for your files, a Pi-hole for ad-blocking, or even a media server like Jellyfin. Everything is running like a charm, but then you hit a common snag: keeping those containers updated. When a new image is released, you'll need to manually pull it, stop the running container, recreate it with the updated image, and hope everything works as expected. Multiply that by the number of containers you're running, and it's clear how this quickly becomes a tedious and time-consuming chore. But there's more at stake than just convenience. Skipping updates or delaying them for too long can lead to outdated software running in your containers, which often means unpatched vulnerabilities. These can become a serious security risk, especially if you're hosting services exposed to the internet. This is where Watchtower steps in, a tool designed to take the hassle out of container updates by automating the entire process. Whether you're running a homelab or managing a production environment, Watchtower ensures your containers are always up-to-date and secure, all with minimal effort on your part.
What is Watchtower?
Watchtower is an open-source tool that automatically monitors your Docker containers and updates them whenever a new version of their image is available. It keeps your setup up-to-date, saving time and reducing the risk of running outdated containers. But it's not just a "set it and forget it" solution; it's also highly customizable, allowing you to tailor its behavior to fit your workflow. Whether you prefer full automation or staying in control of updates, Watchtower has you covered.
How does it work?
Watchtower works by periodically checking for updates to the images of your running containers. When it detects a newer version, it pulls the updated image, stops the current container, and starts a new one using the updated image. The best part? It maintains your existing container configuration, including port bindings, volume mounts, and environment variables. If your containers depend on each other, Watchtower handles the update process in the correct order to avoid downtime.
Deploying Watchtower
Since you're reading this article, I'll assume you already have some sort of homelab or Docker setup where you want to automate container updates. That means I won't be covering Docker installation here. When it comes to deploying Watchtower, it can be done in two ways:
Docker run
If you're just trying it out or want a straightforward deployment, you can run the following command: docker run -d --name watchtower -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower This will spin up a Watchtower container that monitors your running containers and updates them automatically. But here's the thing: I'm not a fan of the docker run command. It's quick, sure, but I prefer a stack approach rather than cramming everything into a single command.
Docker compose
If you fancy using Docker Compose to run Watchtower, here's a minimal configuration that replicates the docker run command above:
version: "3.8"
services:
  watchtower:
    image: containrrr/watchtower
    container_name: watchtower
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
To start Watchtower using this configuration, save it as docker-compose.yml and run: docker-compose up -d This will give you the same functionality as the docker run command, but in a cleaner, more manageable format.
Customizing Watchtower with environment variables
Running Watchtower plainly is all good, but we can make it even better with environment variables and command arguments. Personally, I don't like giving full autonomy to one service to automatically make changes on my behalf. Since I have a pretty decent homelab running crucial containers, I prefer using Watchtower to notify me about updates rather than updating everything automatically. This ensures that I remain in control, especially for containers that are finicky or require a perfect pairing with their databases.
Sneak peek into my homelab
Take a look at my homelab setup: it's mostly CMS containers for myself and for clients, and some of them can behave unpredictably if not updated carefully. So instead of letting Watchtower update everything, I configure it to provide insights and alerts, and then I manually decide which updates to apply. To achieve this, we'll add the following environment variables to our Docker Compose file:
WATCHTOWER_CLEANUP: Removes old images after updates, keeping your Docker host clean.
WATCHTOWER_POLL_INTERVAL: Sets how often Watchtower checks for updates (in seconds). One hour (3600 seconds) is a good balance.
WATCHTOWER_LABEL_ENABLE: Updates only containers with specific labels, giving you granular control.
WATCHTOWER_DEBUG: Enables detailed logs, which can be invaluable for troubleshooting.
WATCHTOWER_NOTIFICATIONS: Configures the notification method (e.g., email) to keep you informed about updates.
WATCHTOWER_NOTIFICATION_EMAIL_FROM: The email address from which notifications will be sent.
WATCHTOWER_NOTIFICATION_EMAIL_TO: The recipient email address for update notifications.
WATCHTOWER_NOTIFICATION_EMAIL_SERVER: SMTP server address for sending notifications.
WATCHTOWER_NOTIFICATION_EMAIL_SERVER_PORT: Port used by the SMTP server (commonly 587 for TLS).
WATCHTOWER_NOTIFICATION_EMAIL_USERNAME: SMTP server username for authentication.
WATCHTOWER_NOTIFICATION_EMAIL_PASSWORD: SMTP server password for authentication.
Here's how the updated docker-compose.yml file would look:
version: "3.8"
services:
  watchtower:
    image: containrrr/watchtower
    container_name: watchtower
    restart: always
    environment:
      WATCHTOWER_CLEANUP: "true"
      WATCHTOWER_POLL_INTERVAL: "3600"
      WATCHTOWER_LABEL_ENABLE: "true"
      WATCHTOWER_DEBUG: "true"
      WATCHTOWER_NOTIFICATIONS: "email"
      WATCHTOWER_NOTIFICATION_EMAIL_FROM: "admin@example.com"
      WATCHTOWER_NOTIFICATION_EMAIL_TO: "notify@example.com"
      WATCHTOWER_NOTIFICATION_EMAIL_SERVER: "smtp.example.com"
      WATCHTOWER_NOTIFICATION_EMAIL_SERVER_PORT: "587"
      WATCHTOWER_NOTIFICATION_EMAIL_USERNAME: "your_email_username"
      WATCHTOWER_NOTIFICATION_EMAIL_PASSWORD: "your_email_password"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
I like to put my credentials in a separate environment file. Once you run the Watchtower container for the first time, you'll receive an initial email confirming that the service is up and running. Here's an example of what that email might look like: After some time, as Watchtower analyzes your setup and scans the running containers, it will notify you if it detects any updates available for your containers. These notifications are sent in real-time and look something like this: This feature ensures you're always in the loop about potential updates without having to check manually.
Final thoughts
I'm really impressed by Watchtower and have been using it for a month now. I recommend playing around with it in an isolated environment first if possible; that's what I did before deploying it in my homelab. The email notification feature is great, but my inbox is now filled with Watchtower emails, so I might create a rule to manage them better. Overall, no complaints so far! I find it better than the manual update approach we discussed earlier. Updating Docker Containers With Zero Downtime A step-by-step methodology that can be very helpful in your day-to-day DevOps activities without sacrificing invaluable uptime. Linux Handbook Avimanyu Bandyopadhyay What about you? What do you use to update your containers? If you've tried Watchtower, share your experience. Anything I should be mindful of? Let us know in the comments!
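One detail worth spelling out: since WATCHTOWER_LABEL_ENABLE is "true" in the configuration above, Watchtower will only touch containers that carry its enable label. A minimal sketch of opting a single service in (the Nextcloud image is just an example):

services:
  nextcloud:
    image: nextcloud
    restart: always
    labels:
      - "com.centurylinklabs.watchtower.enable=true"   # Watchtower may manage this one

Unlabeled containers are left alone, which pairs nicely with the notify-first workflow described above.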
-
FOSS Weekly #25.04: Must-know Jargon, Kernel 6.13 Released, Mint 22.1, WINE 10 and More Linux Stuff
by: Abhishek Prakash I would appreciate your feedback on something 'new'. I plan to add pages that let you discover applications based on certain criteria. It's a work in progress, but feel free to have a look and share your opinion 🙏 Discover Interesting Linux Terminal Tools Discover a selection of interesting tools and utilities you can use from the (dis)comfort of your terminal. It's FOSS Abhishek Prakash Would you like to see more pages like this? 💬 Let's see what else you get in this edition Upgrading to Mint 22.1. Flathub introducing a new section. Linux powering NVIDIA's Project DIGITS. And other Linux news, videos and, of course, memes! This edition of FOSS Weekly is supported by ANY.RUN.
✨ ANY.RUN: Business-Ready Interactive Linux Malware Analysis
Don't let your business be the next target of emerging Linux malware. Strengthen your critical operations with ANY.RUN's Interactive Sandbox, designed for proactive malware hunting and analysis. Here's how it helps businesses stay safe: Quick setup gets you started instantly. Real-time detection spots threats fast. Unlimited sessions let you analyze all your files. Risk-free interaction keeps your system safe. Detailed reports boost your defenses. User-friendly design simplifies analysis. Cloud access works anywhere, anytime. Protect your business and gain peace of mind with the ultimate malware analysis platform trusted by companies worldwide. Try it for FREE! Interactive Online Malware Analysis Sandbox - ANY.RUN Cloud-based malware analysis service. Take your information security to the next level. Analyze suspicious and malicious activities using our innovative tools. ANY.RUN
📰 Linux and Open Source News
Rhino Linux 2025.1 is now available. Intel's Tofino P4 has been open sourced. Flathub has made it easy to find mobile apps on the platform. NVIDIA Project DIGITS is a supercomputer that is powered by Linux. Upgrading to Mint 22.1 from Mint 22. WINE 10.0 is now available. Linux kernel 6.13 is a major upgrade with many changes. Linux Kernel 6.13 Released: Here's What's New! AMD users and old Apple device owners, this is a good release for you! It's FOSS News Sourav Rudra
🧠 What We're Thinking About
This is not how you 'force implement' cloud features. Bambu Lab Firmware Fiasco Has Caused Rifts In The 3D Printing Community Bambu Labs has found themselves in a tough spot. It's FOSS News Sourav Rudra Hindenburg Research closing up shop has resulted in an interesting outcome.
🧮 Linux Tips, Tutorials and More
The Hyprland series sees the addition of two new tutorials: grouping items in Waybar and screenshot utilities in Hyprland. The WSL tutorial series has also been reorganized. Spice up your Kodi setup with these stunning builds! Not the true measure, but this trick lets you get some idea of how long it takes to boot your Linux system. Blend in with the crowd with these jargon terms. 21 Jargon Every Linux User Should Know Even if you don't know Linux well enough, you should know these common terms to blend in ;) It's FOSS Ankush Das
👷 Maker's and AI Corner
Tinker right out of the box with these 7 Raspberry Pi laptops and tablets. 7 Raspberry Pi-Based Laptops and Tablets for Tinkerers Raspberry-powered laptops and tablets can be the perfect pick for your projects and portable computing needs. Here's a list of options. It's FOSS Ankush Das Tip on monitoring the CPU and GPU temperature of your Raspberry Pi.
📹 Videos we are creating
Take a look at the new features in the new Mint 22.1 Xia.
Subscribe to our YouTube channel

✨ Apps of the Week

Text Pieces is a neat app that acts as a scratchpad for developers.

Text Pieces: A Rust-based Open Source App to Help Devs With Text Transformations
A handy little scratchpad app for developers.
It's FOSS News | Sourav Rudra

Managing your finances is crucial, and My Expenses makes it simple.

🛍️ Deal You Would Love

If you are looking for a career in IT with Linux, Cloud and DevOps, Linux Foundation's training and certifications (LFCS, CKA, etc.) are on a limited-time discount. Check Linux Foundation Offer

🧩 Quiz Time

This time it's a crossword on Linux terminal emulators.

Linux Terminal Emulators: Crossword
Let's explore some Linux terminal emulators with this crossword.
It's FOSS | Ankush Das

💡 Quick Handy Tip

In KDE Plasma, you can add some additional effects to the Breeze window decorations using system settings. First, go to System Settings, then go to Colors & Themes → Window Decorations. Here, click on the edit icon near the Breeze theme to get more options; a new settings dialog should appear. You can change appearance options like title alignment, shadows and outlines, button sizes, etc. This might work with other Plasma themes too.

🤣 Meme of the Week

Do you also feel the same way?

🗓️ Tech Trivia

The Macintosh, launched on January 24, 1984, was the first successful computer with a mouse and graphical user interface. It revolutionized personal computing by making technology more accessible. Its debut was marked by Apple's iconic “1984” Super Bowl commercial.

🧑‍🤝‍🧑 FOSSverse Corner

Pro FOSSer Jimmy shares how he is impressed by Atuin.

Atuin Rocks!: Fuzzy Search for Bash History
Hey everyone, Around mid November I decided to try Atuin: Github link. Atuin replaces your shell history with an SQLite database. You can also set it up so that your fully encrypted history is synchronized between different machines*. Before this, while I used the shell history, I cannot say that I was very good at it. If it had been a long time since I had run the command, I was usually better off just trying to figure out what I ran before by typing it out. Of course, sometimes I would have…
It's FOSS Community | Akatama

❤️ With love

Share it with your Linux-using friends and encourage them to subscribe (hint: it's here). Share the articles in Linux Subreddits and community forums. Follow us on Google News and stay updated in your News feed. Opt for It's FOSS Plus membership and support us 🙏

Enjoy FOSS 😄
-
21 Jargon Every Linux User Should Know
by: Ankush Das

Whether you are a Windows/macOS user or someone new to computers, Linux can feel like a challenge when you encounter its unfamiliar terms. You do not usually come across Linux-specific jargon in standard high school computer books, unless there is a dedicated chapter on Linux. So, for the majority of users who have never used Linux, the associated terms will sound alien. With this article, I aim to change that by explaining some of the important jargon that should help you navigate the Linux world better.

1. Kernel

The kernel is the core of an operating system: the part that interacts with the hardware and software to let you take control of the machine. And Linux is just a kernel. We have an article that explains what Linux is in more detail. Every operating system is built on top of a kernel, like the Windows NT kernel for Windows and the XNU kernel for Apple's macOS.

2. Distro

A distro (short for distribution) is a complete operating system package built on top of the Linux kernel. There are hundreds of Linux distros. Each can differ in its desktop environment, package manager, pre-installed software, user interface, Linux kernel version, and use cases. The combination of system components you like best should be your preferred Linux distro. Some examples of distros include Ubuntu, Fedora, Arch Linux, and Linux Mint. Furthermore, there are distros based on other existing distros. For instance, Linux Mint is based on Ubuntu, and Ubuntu is based on Debian. It can be confusing to choose a distro from everything that's available, so I recommend going through our list of the best Linux distributions for all kinds of users.

Best Linux Distributions For Everyone in 2025
Looking for the best Linux distribution that suits everyone? Take a look at our comprehensive list.
It's FOSS | Ankush Das

3. Dual Booting

Having two operating systems installed on a single computer is called dual booting. You can choose to use either of them, whether you have two Linux distros or one Linux and one Windows operating system. If you are considering it, you should know about the dual booting myths before proceeding.

Don't Believe These Dual Boot Myths
Don't listen to what you hear. I tell you the reality from my dual booting experience.
It's FOSS | Ankush Das

4. GRUB

GRUB is the boot manager program (or bootloader) that lists the operating systems installed on your computer. You can find it on most popular Linux distributions, with some exceptions like Pop!_OS. If you didn't know, a bootloader is a program that starts when you boot up the computer and loads the kernel to execute. You can customize the boot order, and also the look of the menu to some extent. You can learn more about GRUB in our jargon buster article.

What is Grub in Linux? What is it Used for?
If you ever used a desktop Linux system, you must have seen this screen. This is called the GRUB screen. Learn what GRUB is in Linux and what it is used for.
It's FOSS | Abhishek Prakash

5. Desktop Environment

The desktop environment is the component of a Linux distribution that provides a graphical user interface (GUI) to interact with everything underneath. It includes elements like icons, toolbars, wallpapers, and widgets. You can get a detailed explanation of what a desktop environment is in our article and explore all the available desktop environments here:

8 Best Desktop Environments For Linux
A list of the best Linux Desktop Environments with their pros and cons. Have a look and see which desktop environment you should use.
It's FOSS | Ankush Das

My favorite desktop environments include GNOME and KDE Plasma.
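If you are curious which desktop environment you are running right now, most modern distros expose it through a standard environment variable. A minimal check from a terminal (treat this as a sketch rather than a guarantee, since not every session sets the variable):

echo "$XDG_CURRENT_DESKTOP"   # typically prints something like GNOME, KDE, or XFCE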
6. Display Server

The display server is the core technology that makes it possible to see and use a graphical user interface (GUI). Without it, you would not have a GUI to interact with. It is not the same as a desktop environment; in fact, a desktop environment includes a display server under the hood to make things work. You might have heard of X11 and Wayland sessions; those are the types of display servers available. Explore more here:

Linux Jargon Buster: What is a Display Server in Linux? What is it Used for?
In Linux-related articles, news, and discussions, you'll often come across the terms display server, Xorg, Wayland, etc. In this explainer article, I'll discuss display servers in Linux. What is a display server in Linux? A display server is a program which is responsible for the input and output…
It's FOSS | Dimitrios

7. Display Manager

The display manager is the program that provides login capabilities in a desktop environment. Some popular display managers are GDM, LightDM, and SDDM. You can learn more about it here:

Linux Jargon Buster: What is Display Manager in Linux?
In this chapter of the Linux Jargon Buster, you'll learn about the display manager in Linux. Is it part of the desktop environment? What does it do?
It's FOSS | Abhishek Prakash

8. GNOME Shell

The GNOME Shell is the user interface component of the GNOME desktop environment, responsible for managing actions like window switching, notifications, and launching applications. You can customize its behavior and add more functionality using GNOME Shell extensions.

9. Terminal Emulator

A terminal emulator is a text-based program that lets you type in commands for the computer to process. Some may even prefer to call it the command-line interface (just like the Command Prompt in Windows). By default, every Linux distribution offers a terminal emulator with a certain set of capabilities. However, you can install a separate one to get more functionality or a different look and feel. You can explore our list of Linux terminal emulators to try out some cool options.

10. Sudo

Sudo is a command on Linux that temporarily gives you elevated (or root) privileges. It is used whenever you want to make a system modification or access a system file. The user is asked to prove they are an administrator of the computer by typing in the password whenever sudo is used in a command. Interestingly, the password is not visible as you type it in the terminal, for security purposes.

11. Package Manager

A package manager is a tool that lets you install, manage, and remove applications on your Linux distro. It can be terminal-centric or come with a graphical user interface (GUI). For instance, the APT package manager for .deb packages is terminal-focused, while Synaptic is a GUI-based tool. Every Linux distro has its own package manager, but some package managers are predominantly found across most Linux distributions. For more information, you can check out our package manager explainer; a small usage sketch follows after it.

What is a Package Manager in Linux?
Learn about packaging systems and package managers in Linux. You'll learn how they work and what kinds of package managers are available.
It's FOSS | Abhishek Prakash
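Here is the small sketch promised above: everyday package manager usage with APT on an Ubuntu or Debian-based distro (the package name is only an example). Notice how sudo from the previous entry shows up here as well:

sudo apt update        # refresh the list of available packages
sudo apt install vlc   # install a package; sudo prompts for your password
sudo apt remove vlc    # uninstall the same package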
12. End of Life

End of Life (EOL) is a term used to point out the date after which a piece of software will stop receiving any maintenance or security updates. In our context, it is usually a Linux distribution, but it is a term used for all kinds of software. For instance, the End of Life for Ubuntu 24.04 LTS is April 2029. The End of Life differs based on the release cycle of the distribution, which I shall mention in the next point.

Suggested Read 📖
What is End of Life in Ubuntu? Everything You Should Know About it
Learn what end of life means for an Ubuntu release, how it impacts you, how to check the support status, and what you should do if your system reaches end of life.
It's FOSS | Abhishek Prakash

13. Long-Term Support (LTS) and Non-LTS Release

A release cycle is the period in which you can expect a software version to keep getting upgrades before the current version reaches its end of life. If something is labeled a Long-Term Support (LTS) release, it means the software will get updates for a long duration, focusing on stability over bleeding-edge changes. The exact duration differs depending on the software or the distro. For instance, every LTS release of Ubuntu gets at least five years of updates, while its flavours get only three years. A non-LTS release is the opposite: the software gets updates for a shorter (or limited) duration. For instance, Ubuntu 24.10 will be supported for only nine months.

14. Point and Rolling Release

A point release is a minor update to a major version of the software. For instance, Linux Mint 22.1 is a point update to Linux Mint 22. A rolling release, on the contrary, does not increment in any similar form; it just gets updates, small or big, with every new push by the developer team after its initial big release. For instance, Arch Linux is one of the best rolling release distros.

What is a Rolling Release Distribution?
What is rolling release? What is a rolling release distribution? How is it different from the point release distributions? All your questions answered.
It's FOSS | Abhishek Prakash

15. Snap, Flatpak, and AppImage

Snap, Flatpak, and AppImage are three different universal packaging formats for Linux software. Unlike DEB or RPM packages, you can use Snap/Flatpak/AppImage packages on any Linux distribution. Technically, they have certain differences among each other, but they serve a similar aim: to make things cross-distribution friendly and remove the hassle of dependencies. A quick example follows after the suggested read below.

Suggested Read 📖
Flatpak vs. Snap: 10 Differences You Should Know
Flatpak vs Snap, know the differences and gain insights as a Linux user to pick the best.
It's FOSS | Ankush Das
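As the quick example mentioned above, here is roughly what installing and running an app from a universal format looks like with Flatpak, assuming the Flathub remote is already configured on your system (the app ID is just an example):

flatpak install flathub org.videolan.VLC   # fetch VLC from the Flathub repository
flatpak run org.videolan.VLC               # launch the sandboxed application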
16. Tiling Window Manager

A tiling window manager is a program that lets you organize your windows in a tile layout. It is a mighty utility for making the best use of your screen space while keeping things organized. It boosts your productivity and can also make your desktop experience prettier.

Suggested Read 📖
Explained: What is a Tiling Window Manager in Linux?
Learn what a tiling window manager is, and the benefits that come along with it.
It's FOSS | Ankush Das

17. Upstream and Downstream

In Linux software lingo, upstream usually refers to the original project on which the current software is based. It can be a kernel, a distro, or an app in our context. Downstream is whatever takes things from the upstream. For instance, the Linux kernel releases are upstream, and the distro developers customizing and shipping that kernel are downstream. You can learn more in our article here:

Linux Jargon Buster: What are Upstream and Downstream?
The terms upstream and downstream are rather ambiguous and, I think, not really used by the general public. If you are a Linux user and do not write or maintain software, chances are pretty good that these terms will mean nothing to you, but they can be instructive in…
It's FOSS | Bill Dyer

18. Daemon

A daemon is a utility program that runs in the background to make sure certain services are running and monitored. For instance, a system update daemon makes sure to check for updates at regular intervals. Get more insights in our article here:

What are Daemons in Linux? Why are They Used?
You'll often come across the term daemon while using Linux. Don't be scared. Learn what daemons are in Linux and why they are used in UNIX-like operating systems.
It's FOSS | Bill Dyer

19. TTY

TTY is an abstract device in UNIX and Linux. Sometimes it refers to a physical input device such as a serial port, and sometimes it refers to a virtual TTY that allows users to interact with the system (reference).

What is TTY in Linux?
Sooner or later, you'll come across the term tty while using Linux. Learn what it is and what its significance is.
It's FOSS | Ankush Das

20. Immutable Distro

Considering you already know what a distro is, an immutable distro is simply a type of distro where you cannot modify the core of the operating system (in other words, it is read-only). This makes for a safer and more reliable experience. Immutable distros have gained popularity recently, and you can find plenty of them to try the concept for yourself.

12 Future-Proof Immutable Linux Distributions
Immutability is a concept in trend. Take a look at what are the options you have for an immutable Linux distribution.
It's FOSS | Ankush Das

21. Super Key

The Windows key that you normally know and love is the super key in Linux. It acts like the Command key on macOS, letting you perform a range of keyboard shortcuts. So, if someone says to press the super key, it is just the Windows key on most keyboards. In some rare instances, the keyboard button could have a Linux icon instead of a Windows one.

What is the Super Key in Ubuntu Linux?
Get familiar with the super (or is it meta) key in Linux in this chapter of the Jargon Buster series.
It's FOSS | Sagar Sharma

Conclusion

It helps to know the common technical terms, especially if you take part in discussions on online forums. Of course, there is no end to jargon. There are many more terms that didn't make it to this list, and newer ones will appear as we progress with time. What are your favorite pieces of Linux jargon that you learned recently? Share them with us in the comments 😄
-
Creating a “Starred” Feed
by: Geoff Graham Tue, 21 Jan 2025 14:21:32 +0000

Chris wrote about “Likes” pages a long while back. The idea is rather simple: “Like” an item in your RSS reader and display it in a feed of other liked items. The little example Chris made is still really good.

CodePen Embed Fallback

There were two things Chris noted at the time. One was that he used a public CORS proxy that he wouldn’t use in a production environment. Good idea to nix that, security and all. The other was that he’d consider using WordPress transients to fetch and cache the data to work around CORS. I decided to do that! The result is this WordPress block I can drop right in here. I’ll plop it in a <details> to keep things brief.

Open Starred Feed

Link on 1/16/2025
Don’t Wrap Figure in a Link
adrianroselli.com
In my post Brief Note on Figure and Figcaption Support I demonstrate how, when encountering a figure with a screen reader, you won’t hear everything announced at once: No screen reader combo treats the caption as the accessible name nor accessible description, not even for an…

Link on 1/15/2025
Learning HTML is the best investment I ever did
christianheilmann.com
One of the running jokes and/or discussion I am sick and tired of is people belittling HTML. Yes, HTML is not a programming language. No, HTML should not just be a compilation target. Learning HTML is a solid investment and not hard to do. I am not…

Link on 1/14/2025
Open Props UI
nerdy.dev
Presenting Open Props UI!…

Link on 1/12/2025
Gotchas in Naming CSS View Transitions
blog.jim-nielsen.com
I’m playing with making cross-document view transitions work on this blog. Nothing fancy. Mostly copying how Dave Rupert does it on his site where you get a cross-fade animation on the whole page generally, and a little position animation on the page title specifically.

Here’s the full code for the block:

function fetch_and_store_data() {
  $transient_key = 'fetched_data';
  $cached_data = get_transient($transient_key);

  if ($cached_data) {
    return new WP_REST_Response($cached_data, 200);
  }

  $response = wp_remote_get('https://feedbin.com/starred/a22c4101980b055d688e90512b083e8d.xml');

  if (is_wp_error($response)) {
    return new WP_REST_Response('Error fetching data', 500);
  }

  $body = wp_remote_retrieve_body($response);
  $data = simplexml_load_string($body, 'SimpleXMLElement', LIBXML_NOCDATA);
  $json_data = json_encode($data);
  $array_data = json_decode($json_data, true);

  $items = [];
  foreach ($array_data['channel']['item'] as $item) {
    $items[] = [
      'title' => $item['title'],
      'link' => $item['link'],
      'pubDate' => $item['pubDate'],
      'description' => $item['description'],
    ];
  }

  set_transient($transient_key, $items, 12 * HOUR_IN_SECONDS);

  return new WP_REST_Response($items, 200);
}

add_action('rest_api_init', function () {
  register_rest_route('custom/v1', '/fetch-data', [
    'methods' => 'GET',
    'callback' => 'fetch_and_store_data',
  ]);
});

Could this be refactored and written more efficiently? All signs point to yes. But here’s how I grokked it:

function fetch_and_store_data() {
}

The function’s name can be anything. Naming is hard. The first two variables:

$transient_key = 'fetched_data';
$cached_data = get_transient($transient_key);

The $transient_key is simply a name that identifies the transient when we set it and get it. In fact, the $cached_data is the getter, so that part’s done. Check! I only want the $cached_data if it exists, so there’s a check for that:

if ($cached_data) {
  return new WP_REST_Response($cached_data, 200);
}

This also establishes a new response from the WordPress REST API, which is where the data is cached. Rather than pull the data directly from Feedbin, I’m pulling it and caching it in the REST API. This way, CORS is no longer an issue being that the starred items are now locally stored on my own domain. That’s where the wp_remote_get() function comes in to form that response from Feedbin as the origin:

$response = wp_remote_get('https://feedbin.com/starred/a22c4101980b055d688e90512b083e8d.xml');

Similarly, I decided to throw an error if there’s no $response. That means there’s no fresh $cached_data and that’s something I want to know right away.
if (is_wp_error($response)) {
  return new WP_REST_Response('Error fetching data', 500);
}

The bulk of the work is merely parsing the XML data I get back from Feedbin to JSON. This scours the XML and loops through each item to get its title, link, publish date, and description:

$body = wp_remote_retrieve_body($response);
$data = simplexml_load_string($body, 'SimpleXMLElement', LIBXML_NOCDATA);
$json_data = json_encode($data);
$array_data = json_decode($json_data, true);

$items = [];
foreach ($array_data['channel']['item'] as $item) {
  $items[] = [
    'title' => $item['title'],
    'link' => $item['link'],
    'pubDate' => $item['pubDate'],
    'description' => $item['description'],
  ];
}

“Description” is a loaded term. It could be the full body of a post or an excerpt — we don’t know until we get it! So, I’m splicing and trimming it in the block’s Edit component to stub it at no more than 50 words. There’s a little risk there because I’m rendering the HTML I get back from the API. Security, yes. But there’s also the chance I render an open tag without its closing counterpart, muffing up my layout. I know there are libraries to address that but I’m keeping things simple for now.

Now it’s time to set the transient once things have been fetched and parsed:

set_transient($transient_key, $items, 12 * HOUR_IN_SECONDS);

The WordPress docs are great at explaining the set_transient() function. It takes three arguments, the first being the $transient_key that was named earlier to identify which transient is getting set. The other two:

$value: This is the object we’re storing in the named transient. That’s the $items object handling all the parsing.
$expiration: How long should this transient last? It wouldn’t be transient if it lingered around forever, so we set an amount of time expressed in seconds. Mine lingers for 12 hours before it expires and then updates the next time a visitor hits the page.

OK, time to return the items from the REST API as a new response:

return new WP_REST_Response($items, 200);

That’s it! Well, at least for setting and getting the transient. The next thing I realized I needed was a custom REST API endpoint to call the data. I really had to lean on the WordPress docs to get this going:

add_action('rest_api_init', function () {
  register_rest_route('custom/v1', '/fetch-data', [
    'methods' => 'GET',
    'callback' => 'fetch_and_store_data',
  ]);
});

That’s where I struggled most and felt like this all took wayyyyy too much time. Well, that and sparring with the block itself. I find it super hard to get the front and back end components to sync up and, honestly, a lot of that code looks super redundant if you were to scope it out. That’s another story altogether.

Enjoy reading what we’re reading! I put a page together that pulls in the 10 most recent items with a link to subscribe to the full feed.

Creating a “Starred” Feed originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
-
Chris’ Corner: HTML
by: Chris Coyier Mon, 20 Jan 2025 16:31:11 +0000

HTML is fun to think about. The old classic battle of “HTML is a programming language” has surfaced in the pages of none other than WIRED magazine. I love this argument, not even for its merit, but for the absolute certainty that you will get people coming out of the woodwork to tell you that HTML is not, in fact, a programming language. Each of them will have their own exotic and deeply personal reasons why. I honestly don’t even care or believe there to be any truth to be found in the debate, but I find it fascinating as a social experiment. It’s like cracking an IE 6 “joke” at a conference. You will get laughs.

I wrote a guest blog post, Relatively New Things You Should Know about HTML Heading Into 2025, here at the start of the year, which had me thinking about it anyway. So here’s more!

You know there are those mailto: “protocol” style links you can use, like:

<a href="mailto:chriscoyier@gmail.com">Email Chris</a>

And they work fine. Or… mostly fine. They work if there is an email client registered on the device. That’s generally the case, but it’s not 100%. And there are much more esoteric ones, as Brian Kardell writes:

A tel: link on my Mac tries to open FaceTime. What does it do on a computer with no calling capability at all, like my daughter’s Fire tablet thingy? Nothing, probably. Just like clicking on a skype: link on my computer here, which doesn’t have Skype installed does: nothing.

A semantic HTML link element that looks and clicks like any other link that does nothing is, well, it’s not good. Brian spells out a situation where it’s extra not good, where a link could say something like “Call Pizza Parlor” with the actual phone number buried behind the scenes in HTML, whereas if it was just a phone number, mobile browsers that support it would automatically turn it into a clickable link, which is surely better.

Every once in a while I get excited about the prospect of writing HTML email with just regular ol’ semantic HTML that you’d write anywhere else. And to be fair: some people absolutely do that and it’s interesting to follow those developments. The last time I tried to get away with “no tables”, the #1 thing that stopped me is that you can’t get a reasonable width and centered layout without them in old Outlook. Oh well, that’s the job sometimes.

Ambiguity. That’s one thing there is plenty of in HTML, and I suspect people’s different brains handle it quite differently. Some people try something, and if it works they are pleased with that and move on. “Works” being a bit subjective of course, since working on the exact browser you’re using at the time as a developer isn’t necessarily reflective of all users. Some people absolutely fret over the correct usage of HTML in all sorts of situations. That’s my kinda people. In Stephanie Eckles’ A Call for Consensus on HTML Semantics she lists all sorts of these ambiguities, homing in on particularly tricky ones where there are certainly multiple ways to approach it. While I’m OK with the freedom and some degree of ambiguity, I like to sweat the semantics and kinda do wish there were just emphatically right answers sometimes.

Wanna know why hitting an exact markup pattern matters sometimes? Aside from good accessibility and potentially some SEO concern, sometimes you get good bonus behavior. Simon Willison blogged about Footnotes that work in RSS readers, which is one such situation, building on some thoughts and light research I had done.
This is pretty niche, but if you do footnotes just exactly so, you’ll get very nice hover behavior for footnotes in NetNewsWire, which happens to be an RSS reader that I like.

They talk about paving the cowpaths in web standards, meaning standardizing ideas when it’s obvious authors are doing them a bunch. I, for one, have certainly seen “spoilers” implemented quite a bit in different ways. Tracy Durnell wonders if we should just add it to HTML directly.
-
Best Kodi Builds to Spice Up Your Media Server Experience
by: Abhishek Kumar

While I’ve always enjoyed Kodi’s default skin, I’ve found that it can get a bit "boring" after a while. That’s when I started exploring Kodi builds: these pre-packaged setups not only refresh the interface but also bring in various features and add-ons that make the experience more exciting. After spending some time fiddling with different builds, I’ve collected the ones I find particularly interesting and amazing. Whether you're new to Kodi or looking for a fresh look, these builds will definitely take your streaming game to the next level.

What's the point of Kodi builds?

Kodi, by default, gives you the freedom to customize everything from the interface to the content you access. However, this can sometimes mean a lot of manual work, like searching for and installing individual add-ons for movies, TV shows, live sports, and more. While this gives you control, it can be time-consuming, especially for beginners. In simple words, Kodi builds bundle everything you need into one pre-configured setup, from add-ons to custom settings, saving you time and effort. Instead of piecing everything together yourself, you get a fully functional and visually appealing interface right from the start.

How to install Kodi builds?

Installing a Kodi build is a straightforward process. Follow these steps to get started:

1. Enable Unknown Sources: Open Kodi, go to Settings > System > Add-ons, and toggle Unknown sources on. (Accept the warning.)
2. Add the Repository: In Settings > File Manager, click Add Source, enter the repository URL, name it, and save.
3. Install the Build: Go to Add-ons > Install from zip file, select the repository you just added, and install the build wizard (like Chef Wizard or Doomzday Wizard). Open the wizard from your Program Add-ons, pick your desired build, and follow the on-screen steps to install it.
4. Restart Kodi, and your new build will be ready to use!

📋 All the builds mentioned in this list are designed to work with Kodi 21 Omega, the latest release of Kodi right now. Some of these builds may also be compatible with earlier versions like Kodi 20 Nexus, 19 Matrix, and 18 Leia, and I’ve pointed those out where applicable.

1. Doomzday Nova

Whether you're using a low-RAM device like a FireStick or an Android TV box, or you have a powerful computer or SBC, Doomzday has something for everyone. The Nova TV build, for example, is a lightweight option that runs smoothly on lower-spec devices, while other feature-rich builds are perfect for high-end systems. With a variety of popular Kodi add-ons pre-installed, you can easily access all your favorite content in one place.

Key Features:
- Lightweight builds for low-RAM devices (e.g., Raspberry Pi 3, FireStick)
- Feature-rich builds for high-end devices
- Pre-installed popular add-ons like Asgard, The Crew, and more
- Easy-to-navigate interface with different categories
- Supports a wide range of content: movies, TV shows, sports, live TV, etc.
- Includes 4K streaming options (Debrid 4K)
- Frequent updates and improvements
- Free and premium streaming options (Debrid support)
- Access to specialized content like documentaries and family-friendly shows

Doomzday

2. Diggz Xenon

Diggz Xenon is often regarded as one of the best Kodi builds, and for good reason. Its futuristic interface, vast content library, and solid collection of add-ons make it a top choice for cord-cutters. Located within the Chef Wizard, Xenon offers both "Debrid" and "Free" versions, allowing users to choose based on their needs.
The Debrid version requires a Real-Debrid account to access higher-quality streaming links, while the Free version skips the need for a debrid service. With the addition of the AIO (All-In-One) update, users can now preview builds before selecting, making it even easier to find the perfect setup.

Key Features:
- Sleek, futuristic interface with smooth navigation
- Two versions: Debrid (for higher-quality links) and Free (no debrid required)
- Extensive content library covering movies, TV shows, sports, and more
- Located inside the Chef Wizard, which houses other high-quality builds
- AIO (All-In-One) update for previewing builds before installation
- Includes popular add-ons like Umbrella, Seren, FEN, and Asgard
- Regular updates for improved functionality and content
- Excellent for both new and experienced Kodi users
- Great support for both free and premium streaming options

Diggz Xenon

3. Aspire

Aspire is a well-regarded build in the Kodi community, known for its sleek design and solid performance. It strikes a great balance between style and functionality, making it an excellent choice for users who want both aesthetics and practicality. Aspire works smoothly on a variety of devices, including lower-spec options like the onn. Google TV Box and Fire TV Stick Lite. It can be installed through the Doomzday Wizard or EzzerMan's Wizard, offering flexibility in how you set it up.

Key Features:
- Sleek, stylish design with a user-friendly interface
- Small size (267 MB), making it ideal for lower-spec devices
- Packed with content, including on-demand titles and live channels
- Supports integration with debrid services for enhanced performance
- Can be installed via the Doomzday Wizard or EzzerMan's Wizard
- Works well on devices like the Fire TV Stick Lite and onn. Google TV Box
- Includes popular add-ons like Diggz Free99, Ghost, and Magic Dragon
- Smooth streaming experience with minimal buffering
- Regular updates to keep the build fresh and functional
- Great for both casual viewers and avid streamers

Aspire

4. Grindhouse

Whether you're looking for lightweight builds or feature-rich setups, Grindhouse has something for everyone. It’s home to over a dozen builds, including popular ones like AR Build, Blue, Decades, Horror, Jaws, and Pin Up. These builds are designed to provide an all-in-one experience, so you don’t need separate outlets for movies, TV shows, and live programming. The sleek, dim-themed interface is easy to navigate, with sections for Builds, Maintenance, Backup/Restore, Tools, and more. Grindhouse continues to be a go-to repository for many Kodi users, and it’s easy to see why it made it to our list.

Key Features:
- Diverse collection of builds, from lightweight to feature-rich
- All-in-one builds for movies, TV shows, and live programming
- Easy-to-navigate, sleek, dim-themed interface
- Includes popular builds like AR Build, Blue, Decades, and more
- Sections for Builds, Maintenance, Backup/Restore, Tools, and Close
- Continually updated and maintained for optimal performance
- Ideal for users who want a variety of content in one place
- Popular among Kodi users for its versatility and ease of use
- No need for multiple add-ons to access all types of content
- Simple setup and installation process

Grindhouse

5. Plutonium

Plutonium is a lightweight, visually engaging build with a colorful interface that makes it a great choice for devices with limited storage. It’s designed primarily for Video On-Demand (VOD) content, offering a packed library of movies and TV series.
While it doesn’t include live TV channels, this simplicity helps it run smoothly and quickly. If you already have an IPTV service, Plutonium might be the perfect build to complement your setup. The latest update from EzzerMan ensures compatibility with Kodi 21, continuing to deliver an optimized, user-friendly experience.

Key Features:
- Colorful, engaging user interface
- Extensive library of movies and TV series for on-demand streaming
- No live TV channels, but ideal for users with IPTV services
- Simple setup and navigation for easy use
- Optimized for streaming video content without buffering
- Available through EzzerMan’s Wizard, alongside other notable builds
- Easy to install and quick to get started
- Frequent updates to ensure smooth performance

Plutonium

6. Xontrix

Xontrix is a powerful all-in-one Kodi build that offers both on-demand content and live TV channels. It’s housed in the popular Chains Repository, known for its high-quality builds and add-ons. Installation is straightforward, and the build works seamlessly right after download. The user-friendly interface allows easy navigation between content categories and add-ons, making it simple to find what you're looking for. Xontrix also features a dedicated Kids section for family-friendly content and offers immersive music options. For optimal performance, integrating a premium resolving service like Real-Debrid is recommended, as many of the build’s add-ons are “premium” options.

Key Features:
- All-in-one build with both on-demand content and live TV channels
- Easy installation and flawless performance right after download
- User-friendly interface with categories for quick navigation
- Includes a Kids section for family-friendly content
- Music options for an immersive audio experience
- Best used with a premium resolving service like Real-Debrid for enhanced performance
- Located in the reputable Chains Repository
- Customizable settings to adjust categories and services
- Supports a variety of popular add-ons

Xontrix

7. Green Monster

Green Monster is a visually impressive and versatile Kodi build known for its lightweight design and top-notch video add-ons. It offers a variety of categories, making it a great choice for streaming movies, TV shows, and live channels. The build has been around for several years and continues to receive frequent updates. Although it may take a few minutes to set up after installation due to its slightly heavier size, the wait is worth it. Once installed, you’ll find a wide range of content options that can be easily customized to suit your needs.

Key Features:
- Impressive user interface with a visually appealing design
- Lightweight yet versatile, with a variety of categories
- Frequently updated by developers to ensure a smooth experience
- Great for streaming movies, TV shows, and live channels
- Slightly heavier than other builds, so it may take time to set up initially
- Provides a wide range of content choices once installed
- Customizable settings to adjust to your preferences
- Top add-ons for an enhanced streaming experience

Green Monster

8. Misfit Mods Lite

Misfit Mods is back and better than ever! Known for its sleek and modern layout, this build has been a favorite among Kodi users, especially those who used it on Kodi 19 Matrix. Now, with compatibility for Kodi 21 Omega, it’s even more accessible. Misfit Mods Lite offers thousands of on-demand movies and TV shows, along with hundreds of live channels. It also features categories for children's shows and music, making it a versatile option for the entire family.
For an enhanced experience, integrating a premium resolving service like Premiumize, AllDebrid, or LinkSnappy is highly recommended.

Key Features:
- Sleek and modern user interface for easy navigation
- Thousands of on-demand movies and TV shows
- Hundreds of live channels available
- Dedicated categories for children’s shows and music
- Ideal for users looking for a well-rounded build
- Best experience with Premiumize, AllDebrid, or LinkSnappy integration
- Simple installation and quick setup
- Regular updates to ensure smooth performance

Misfit Mods Lite

9. Superman

The Superman Kodi build is a fan favorite, known for its Superman-themed interface and versatile content options. Whether you're into movies, TV shows, live channels, or sports, this build has it all. It even features a dedicated “Marvel & DC” category for superhero content, making it a perfect choice for comic book fans. The user interface is easy to navigate, ensuring a smooth experience on any device. For the best streaming performance, it’s recommended to integrate a cloud provider.

Key Features:
- Superman-themed interface with easy navigation
- Offers movies, TV series, live channels, sports, and a superhero-specific “Marvel & DC” category
- Smooth user experience on all devices
- Regularly updated with new content
- Works well for both beginners and experienced Kodi users
- Reliable performance with no buffering (with proper cloud integration)
- Top add-ons for enhanced streaming quality

Superman

10. Estuary Switch

If you are like me and prefer the classic, familiar look of Kodi, Estuary Switch is the build for you. It uses the default Estuary skin, ensuring that users don’t have to adjust to a new interface. While it doesn't offer an overwhelming number of add-ons, it includes the essentials for basic streaming needs. The build allows users to filter content by Genre, Year, and Decade, making it easy to find what you're looking for. Its simplicity and lightweight nature make it ideal for less powerful streaming devices, offering a smooth experience without unnecessary bloat.

Key Features:
- Classic Kodi interface with the default Estuary skin
- Easy navigation with content filtering by Genre, Year, and Decade
- Essential add-ons for basic streaming needs
- Simple and lightweight, perfect for low-powered devices
- Familiar home screen layout for quick access to media
- Great for users who prefer a minimalistic setup
- Regular updates for optimal performance

Estuary Switch

Other Notable Builds

Due to space constraints, we couldn’t go into detail about every fantastic build available. However, here are some notable builds worth checking out:

- Cosmic One: A Trakt-compatible build from The Crew repo, offering categories like movies, TV shows, sports, live content, and more.
- CrewNique: Found in both the Chains Build Wizard and The Crew Wizard, this build includes movies, IPTV, TV shows, and sports categories.
- OneFlix: A Debrid-only build described as a “Netflix-style streaming service,” featuring notable add-ons like Ghost, AfFENity, Umbrella, and SEREN.
- POVico: With an interface reminiscent of Kodi’s original aesthetic, this build focuses on movies and TV shows.

Conclusion

Choosing the best Kodi build ultimately comes down to your personal preferences and streaming needs. Whether you’re drawn to the versatility of Diggz Xenon, the torrent-powered Burst, or the sleek interface of Aspire, there’s no shortage of excellent options to enhance your Kodi experience in 2025.
While Kodi builds are legal to install and use, it’s important to remain cautious about the content you access. 🏴‍☠️ Many builds include third-party add-ons, and users should ensure they only stream publicly available titles to stay on the right side of copyright laws. For safety, stick to trusted sources and scan files for malware before installation.

Kodi is a powerful tool, and with the right build, it can transform your media setup into a streaming powerhouse. Enjoy exploring, and happy streaming! 🎞️
-
pwd command in Linux
by: Satoshi Nakamoto Sat, 18 Jan 2025 10:27:48 +0530

The pwd command in Linux, short for Print Working Directory, displays the absolute path of the current directory, helping users navigate the file system efficiently. It is one of the first commands you use when you start learning Linux. And if you are absolutely new, take advantage of this free course:

Learn the Basic Linux Commands in an Hour [With Videos]
Learn the basics of Linux commands in this crash course.
Linux Handbook | Abhishek Prakash

pwd command syntax

Like other Linux commands, pwd follows this syntax:

pwd [OPTIONS]

Here, [OPTIONS] are used to modify the default behavior of the pwd command. If you don't use any options with the pwd command, it shows the physical path of the current working directory by default. Unlike many other Linux commands, pwd does not come with many flags; it has only two important ones:

-L: Displays the logical current working directory, including symbolic links.
-P: Displays the physical current working directory, resolving symbolic links.
--help: Displays help information about the pwd command.
--version: Outputs version information of the pwd command.

Now, let's take a look at some practical examples of the pwd command.

1. Display the current location

This is what the pwd command is famous for: giving you the name of the directory where you are located, or from where you are running the command.

pwd

2. Display the logical path including symbolic links

If you want to display the logical path, including symbolic links, all you have to do is execute the pwd command with the -L flag:

pwd -L

To showcase its usage, I will go through multiple steps, so stay with me. First, go to the tmp directory using the cd command:

cd /tmp

Now, let's create a symbolic link pointing to the /var/log directory:

ln -s /var/log log_link

Finally, change your directory to log_link and use the pwd command with the -L flag:

cd log_link
pwd -L

In the above steps, I went to the /tmp directory, created a symbolic link pointing to a specific location (/var/log), and then used the pwd command, which showed me the symbolic link path.

3. Display the physical path, resolving symbolic links

The pwd command is one of the ways to resolve symbolic links; you'll see the destination directory a soft link points to. Use the -P flag:

pwd -P

I am going to use the symbolic link I created in the 2nd example. Here's what I did:

1. Navigate to /tmp.
2. Create a symbolic link (log_link) pointing to /var/log.
3. Change into the symbolic link (cd log_link).

Once you perform all the steps, you can check the real path of the symbolic link:

pwd -P

4. Use the pwd command in shell scripts

To get the current location in a bash shell script, you can store the value of the pwd command in a variable and print it later:

current_dir=$(pwd)
echo "You are in $current_dir"

Now, if you execute this shell script in your home directory like I did, you will get output similar to mine.

Bonus: Know the previous working directory

This is not exactly a use of the pwd command, but it is related and interesting. There is an environment variable in Linux called OLDPWD which stores the previous working directory path. This means you can get the previous working directory by printing the value of this environment variable:

echo "$OLDPWD"
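To see OLDPWD in action, here is a tiny illustrative sketch (the directory names are just examples); the cd - shortcut relies on this same variable to jump back to wherever you were before:

cd /var/log       # current directory is now /var/log
cd /tmp           # OLDPWD now holds /var/log
echo "$OLDPWD"    # prints /var/log
cd -              # uses OLDPWD to switch back to /var/log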
Conclusion

This was a quick tutorial on how you can use the pwd command in Linux, covering its syntax, options, and some practical examples. I hope you find them helpful. If you have any queries or suggestions, leave us a comment.