Everything posted by Blogger

  1. by: Juan Diego Rodríguez Wed, 29 Jan 2025 14:13:53 +0000

Have you ever stumbled upon something new and gone to research it, only to find there is little to no information about it? It’s a mixed feeling: confusing and discouraging because there is no apparent direction, but also exciting because it’s probably new to lots of people, not just you. Something like that happened to me while writing an Almanac entry for the @view-transition at-rule and its types descriptor.

You may already know about Cross-Document View Transitions: with a few lines of CSS, they allow for transitions between two pages, something that in the past required a single-page app framework with a side of animation library. In other words, lots of JavaScript. To start a transition between two pages, we have to set the @view-transition at-rule’s navigation descriptor to auto on both pages, and that gives us a smooth cross-fade transition between the two pages. So, as the old page fades out, the new page fades in.

```css
@view-transition {
  navigation: auto;
}
```

That’s it! And navigation is the only descriptor we need. In fact, it’s the only descriptor available for the @view-transition at-rule, right? Well, it turns out there is another descriptor, a lesser-known brother, and one that probably envies how much attention navigation gets: the types descriptor.

What do people say about types?

Cross-Document View Transitions are still fresh from the oven, so it’s normal that people haven’t fully dissected every aspect of them, especially since they introduce a lot of new stuff: a new at-rule, a couple of new properties, and tons of pseudo-elements and pseudo-classes. However, it still surprises me how rarely types gets mentioned. Some documentation fails to even name it among the valid @view-transition descriptors. Luckily, though, the CSS specification does offer a little clarification about it. To be more precise, types can take a space-separated list with the names of the active types (as <custom-ident>), or none if there aren’t valid active types for that page.

Name: types
For: @view-transition
Value: none | <custom-ident>+
Initial: none

So the following values would work inside types:

```css
@view-transition {
  navigation: auto;
  types: bounce;
}

/* or a list */
@view-transition {
  navigation: auto;
  types: bounce fade rotate;
}
```

Yes, but what exactly are “active” types? That word “active” seems to be doing a lot of heavy lifting in the CSS specification’s definition, and I want to unpack it to better understand what it means.

Active types in view transitions

The problem: a cross-fade animation for every page is good, but a common thing we need to do is change the transition depending on the pages we are navigating between. For example, on paginated content, we could slide the content to the right when navigating forward and to the left when navigating backward. In a social media app, clicking a user’s profile picture could persist the picture throughout the transition. All this would mean defining several transitions in our CSS, but doing so would make them conflict with each other in one big slop. What we need is a way to define several transitions, but only pick one depending on how the user navigates the page.

The solution: active types define which transition gets used and which elements should be included in it. In CSS, they are used through :active-view-transition-type(), a pseudo-class that matches an element if it has a specific active type. Going back to our last example, we defined the document’s active type as bounce.
We could enclose that bounce animation behind an :active-view-transition-type(bounce), such that it only triggers on that page.

```css
/* This one will be used! */
html:active-view-transition-type(bounce) {
  &::view-transition-old(page) {
    /* Custom animation */
  }
  &::view-transition-new(page) {
    /* Custom animation */
  }
}
```

This prevents other view transitions from running if they don’t match any active type:

```css
/* This one won't be used! */
html:active-view-transition-type(slide) {
  &::view-transition-old(page) {
    /* Custom animation */
  }
  &::view-transition-new(page) {
    /* Custom animation */
  }
}
```

I asked myself whether this triggers the transition when going to the page, when leaving the page, or in both cases. It turns out it only limits the transition when going to the page, so the last bounce animation is only triggered when navigating toward a page with a bounce value on its types descriptor, but not when leaving that page. This allows for custom transitions depending on which page we are going to.

The following demo has two pages that share a stylesheet with the bounce and slide view transitions, respectively enclosed behind an :active-view-transition-type(bounce) and an :active-view-transition-type(slide) like the last example. We can control which page uses which view transition through the types descriptor.

The first page uses the bounce animation:

```css
@view-transition {
  navigation: auto;
  types: bounce;
}
```

The second page uses the slide animation:

```css
@view-transition {
  navigation: auto;
  types: slide;
}
```

You can visit the demo here and see the full code over at GitHub.

The types descriptor is used more in JavaScript

The main problem is that we can only control the transition depending on the page we’re navigating to, which puts a major cap on how much we can customize our transitions. For instance, the pagination and social media examples we looked at aren’t possible using just CSS, since we need to know where the user is coming from. Luckily, using the types descriptor is just one of three ways that active types can be populated. Per the spec, they can be:

- Passed as part of the arguments to startViewTransition(callbackOptions)
- Mutated at any time, using the transition’s types
- Declared for a cross-document view transition, using the types descriptor

The first option is for starting a view transition from JavaScript, but we want to trigger transitions when the user navigates to the page by themselves (like when clicking a link). The third option is the types descriptor, which we already covered. The second option is the right one for this case! Why? It lets us set the active transition type on demand, and we can perform that change just before the transition happens using the pagereveal event. That means we can get the user’s start and end page from JavaScript and then set the correct active type for that case (a minimal sketch of this approach appears at the end of this post).

I must admit, I am not the most experienced guy to talk about this option, so once I demo the heck out of different transitions with active types I’ll come back with my findings! In the meantime, I encourage you to read about active types here if you are like me and want more on view transitions:

- View transition types in cross-document view transitions (Bramus)
- Customize the direction of a view transition with JavaScript (Umar Hansa)

What on Earth is the `types` Descriptor in View Transitions? originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
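Here is that sketch: a minimal, hedged example of the pagereveal approach described above. The page paths are hypothetical, and it assumes a browser that supports cross-document view transitions and the Navigation API:

```js
// Minimal sketch: pick an active type based on where the user came from.
// Assumes cross-document view transitions are enabled via @view-transition.
window.addEventListener("pagereveal", (event) => {
  // No cross-document view transition in flight? Nothing to do.
  if (!event.viewTransition) return;

  const from = new URL(navigation.activation.from.url);
  const to = new URL(navigation.activation.entry.url);

  // Hypothetical paths: slide when moving "forward", bounce otherwise
  if (from.pathname === "/page-1/" && to.pathname === "/page-2/") {
    event.viewTransition.types.add("slide");
  } else {
    event.viewTransition.types.add("bounce");
  }
});
```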
  2. by: LHB Community Wed, 29 Jan 2025 18:26:26 +0530

Kubernetes is a powerful container orchestration platform that enables developers to manage and deploy containerized applications with ease. One of its key components is the ReplicaSet, which plays a critical role in ensuring high availability and scalability of applications. In this guide, we will explore the ReplicaSet, its purpose, and how to create and manage it effectively in your Kubernetes environment.

What is a ReplicaSet in Kubernetes?

A ReplicaSet in Kubernetes is a higher-level abstraction that ensures a specified number of pod replicas are running at all times. If a pod crashes or becomes unresponsive, the ReplicaSet automatically creates a new pod to maintain the desired state. This guarantees high availability and resilience for your applications. The key purposes of a ReplicaSet include:

- Scaling Pods: ReplicaSets manage the replication of pods, ensuring the desired number of replicas are always running.
- High Availability: Ensures that your application remains available even if one or more pods fail.
- Self-Healing: Automatically replaces failed pods to maintain the desired state.
- Efficient Workload Management: Helps distribute workloads across nodes in the cluster.

How Does a ReplicaSet Work?

A ReplicaSet relies on selectors to match pods using labels. It uses these selectors to monitor the pods and ensures the actual number of pods matches the specified replica count. If the number is less than the desired count, new pods are created. If it’s greater, excess pods are terminated.

Creating a ReplicaSet

To create a ReplicaSet, you define its configuration in a YAML file. Here’s an example:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.21
          ports:
            - containerPort: 80
```

In this YAML file:

- replicas: specifies the desired number of pod replicas.
- selector: matches pods with the label app=nginx.
- template: defines the pod’s specification, including the container image and port.

Deploying a ReplicaSet

Once you have the YAML file ready, follow these steps to deploy it in your Kubernetes cluster.

Apply the YAML configuration to create the ReplicaSet:

```bash
kubectl apply -f nginx-replicaset.yaml
```

Verify that the ReplicaSet was created and the pods are running:

```bash
kubectl get replicaset
```

Output:

```
NAME               DESIRED   CURRENT   READY   AGE
nginx-replicaset   3         3         3       5s
```

View the pods created by the ReplicaSet:

```bash
kubectl get pods
```

Output:

```
NAME                     READY   STATUS    RESTARTS   AGE
nginx-replicaset-xyz12   1/1     Running   0          10s
nginx-replicaset-abc34   1/1     Running   0          10s
nginx-replicaset-lmn56   1/1     Running   0          10s
```

Scaling a ReplicaSet

You can easily scale the number of replicas in a ReplicaSet. For example, to scale the above ReplicaSet to 5 replicas:

```bash
kubectl scale replicaset nginx-replicaset --replicas=5
```

Verify the updated state:

```bash
kubectl get replicaset
```

Output:

```
NAME               DESIRED   CURRENT   READY   AGE
nginx-replicaset   5         5         5       2m
```

Learn Kubernetes Operator: learn to build, test, and deploy a Kubernetes Operator using Kubebuilder as well as Operator SDK in this course. (Linux Handbook, Team LHB)

Conclusion

A ReplicaSet is an essential component of Kubernetes, ensuring the desired number of pod replicas are running at all times. By leveraging ReplicaSets, you can achieve high availability, scalability, and self-healing for your applications with ease.
Whether you’re managing a small application or a large-scale deployment, understanding ReplicaSets is crucial for effective workload management.

✍️ Author: Hitesh Jethwa has more than 15 years of experience with Linux system administration and DevOps. He likes to explain complicated topics in an easy-to-understand way.
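As a quick way to see the self-healing behavior described above for yourself, delete one of the pods and watch the ReplicaSet replace it. The pod name below comes from the example output and will differ on your cluster:

```bash
# Delete one of the ReplicaSet's pods (use a name from `kubectl get pods`)
kubectl delete pod nginx-replicaset-xyz12

# The ReplicaSet notices the shortfall and creates a replacement pod;
# filtering by the app=nginx label shows the restored set of three
kubectl get pods -l app=nginx
```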
  3. by: Satoshi Nakamoto Wed, 29 Jan 2025 16:53:22 +0530

A few years ago, we witnessed a shift to containers, and in current times, I believe containers have become an integral part of the IT infrastructure for most companies. Traditional monitoring tools often fall short in providing the visibility needed to ensure performance, security, and reliability. In my experience, monitoring resource allocation is the most important part of deploying containers, and that is why I have put together the top container monitoring solutions offering real-time insights into your containerized environments.

Top Container Monitoring Solutions

Before I jump into details, here's a brief overview of all the tools I'll be discussing in a moment:

| Tool | Pricing & Plans | Free Tier? | Key Free Tier Features | Key Paid Plan Features |
|---|---|---|---|---|
| Middleware | Free up to 100GB; pay-as-you-go at $0.3/GB; custom enterprise plans | Yes | Up to 100GB data, 1k RUM sessions, 20k synthetic checks, 14-day retention | Unlimited data volume; data pipeline & ingestion control; single sign-on; dedicated support |
| Datadog | Free plan (limited hosts & 1-day metric retention); Pro starts at $15/host/month; Enterprise from $23 | Yes | Basic infrastructure monitoring for up to 5 hosts; limited metric retention | Extended retention, advanced anomaly detection, over 750 integrations, multi-cloud support |
| Prometheus & Grafana | Open-source; no licensing costs | Yes | Full-featured metrics collection (Prometheus), custom dashboards (Grafana) | Self-managed support only; optional managed services through third-party providers |
| Dynatrace | 15-day free trial; usage-based: $0.04/hour for infrastructure-only, $0.08/hour for full-stack | Trial only | N/A (trial only) | AI-driven root cause analysis, automatic topology discovery, enterprise support, multi-cloud observability |
| Sematext | Free plan (Basic) with limited container monitoring; paid plans start at $0.007/container/hour | Yes | Live metrics for a small number of containers, 30-minute retention, limited alert rules | Increased container limits, extended retention, unlimited alert rules, full-stack monitoring |
| Sysdig | Free tier; Sysdig Monitor starts at $20/host/month; Sysdig Secure is $60/host/month | Yes | Basic container monitoring, limited metrics and retention | Advanced threat detection, vulnerability management, compliance checks, Prometheus support |
| SolarWinds | No permanent free plan; pricing varies by module (starts around $27.50/month or $2995 single license) | Trial only | N/A (trial only) | Pre-built Docker templates, application-centric mapping, hardware health, synthetic monitoring |
| Splunk | Observability Cloud starts at $15/host/month (annual billing); free trial available | Trial only | N/A (trial only) | Real-time log and metrics analysis, AI-based anomaly detection, multi-cloud integrations, advanced alerting |
| MetricFire | Paid plans start at $19/month; free trial offered | Trial only | N/A (trial only) | Integration with Graphite and Prometheus, customizable dashboards, real-time alerts |
| SigNoz | Open-source (self-hosted) or custom paid support | Yes | Full observability stack (metrics, traces, logs) with no licensing costs | Commercial support, managed hosting services, extended retention options |

Here, "N/A (trial only)" means that the tool does not offer a permanent free tier but provides a limited-time free trial for users to test its features. After the trial period ends, users must subscribe to a paid plan to continue using the tool. Essentially, there is no free version available for long-term use, only a temporary trial.
1. Middleware

Middleware is an excellent choice for teams looking for a free or scalable container monitoring solution. It provides pre-configured dashboards for Kubernetes environments and real-time visibility into container health. With a free tier supporting up to 100GB of data and a pay-as-you-go model at $0.3/GB thereafter, it’s ideal for startups or small teams.

Key features:
- Pre-configured dashboards for Kubernetes
- Real-time metrics tracking
- Alerts for critical events
- Correlation of metrics with logs and traces

Pros:
- Free tier available
- Easy setup with minimal configuration
- Scalable pricing model

Cons:
- Limited advanced features compared to premium tools

Try Middleware

2. Datadog

Datadog is a premium solution offering observability across infrastructure, applications, and logs. Its auto-discovery feature makes it particularly suited for dynamic containerized environments. The free plan supports up to five hosts with limited retention. Paid plans start at $15 per host per month.

Key features:
- Real-time performance tracking
- Anomaly detection using ML
- Auto-discovery of new containers
- Distributed tracing and APM

Pros:
- Extensive integrations (750+)
- User-friendly interface
- Advanced visualization tools

Cons:
- High cost for small teams
- Pricing can vary based on usage spikes

Try Datadog

3. Prometheus & Grafana

This open-source duo provides powerful monitoring and visualization capabilities. Prometheus has an edge in metrics collection with its PromQL query language, while Grafana offers stunning visualizations. This makes it perfect for teams seeking customization without licensing costs (see the PromQL sketch at the end of this post).

Key features:
- Time-series data collection
- Flexible query language (PromQL)
- Customizable dashboards
- Integrated alerting system

Pros:
- Free to use
- Highly customizable
- Strong community support

Cons:
- Requires significant setup effort
- Limited out-of-the-box functionality

Try Prometheus & Grafana

4. Dynatrace

Dynatrace is an AI-powered observability platform designed for large-scale hybrid environments. It automates topology discovery and offers deep insights into containerized workloads. Pricing starts at $0.04/hour for infrastructure-only monitoring.

Key features:
- AI-powered root cause analysis
- Automatic topology mapping
- Real-user monitoring
- Cloud-native support (Kubernetes/OpenShift)

Pros:
- Automated configuration
- Scalability for large environments
- End-to-end visibility

Cons:
- Expensive for smaller teams
- Proprietary platform limits flexibility

Try Dynatrace

5. Sematext

Sematext is a lightweight tool that allows users to monitor metrics and logs across Docker and Kubernetes platforms. Its free plan supports basic container monitoring with limited retention and alerting rules. Paid plans start at just $0.007/container/hour.

Key features:
- Unified dashboard for logs and metrics
- Real-time insights into containers and hosts
- Auto-discovery of new containers
- Anomaly detection and alerting

Pros:
- Affordable pricing plans
- Simple setup process
- Full-stack observability features

Cons:
- Limited advanced features compared to premium tools

Try Sematext

6. Sysdig

Sysdig combines container monitoring with container security. A free tier covers basic container monitoring with limited metrics and retention, Sysdig Monitor starts at $20/host/month, and Sysdig Secure, at $60/host/month, adds advanced threat detection, vulnerability management, compliance checks, and Prometheus support.

Try Sysdig

7. SolarWinds

SolarWinds offers an intuitive solution for SMBs needing straightforward container monitoring. While it doesn’t offer a permanent free plan, its pricing starts at around $27.50/month or $2995 as a one-time license fee.

Key features:
- Pre-built Docker templates
- Application-centric performance tracking
- Hardware health monitoring
- Dependency mapping

Pros:
- Easy deployment and setup
- Out-of-the-box templates
- Suitable for smaller teams

Cons:
- Limited flexibility compared to open-source tools

Try SolarWinds
8. Splunk

Splunk not only provides log analysis but also strong container monitoring capabilities through its Observability Cloud suite. Pricing starts at $15/host/month on annual billing.

Key features:
- Real-time log and metrics analysis
- AI-based anomaly detection
- Customizable dashboards and alerts
- Integration with OpenTelemetry standards

Pros:
- Powerful search capabilities
- Scalable architecture
- Extensive integrations

Cons:
- High licensing costs for large-scale deployments

Try Splunk

9. MetricFire

MetricFire simplifies container monitoring by offering customizable dashboards and seamless integration with Kubernetes and Docker. It is ideal for teams looking for a reliable hosted solution without the hassle of managing infrastructure. Pricing starts at $19/month.

Key features:
- Hosted Graphite and Grafana dashboards
- Real-time performance metrics
- Integration with Kubernetes and Docker
- Customizable alerting systems

Pros:
- Easy setup and configuration
- Scales effortlessly as metrics grow
- Transparent pricing model
- Strong community support

Cons:
- Limited advanced features compared to proprietary tools
- Requires technical expertise for full customization

Try MetricFire

10. SigNoz

SigNoz is an open-source alternative to proprietary APM tools like Datadog and New Relic. It offers a platform for logs, metrics, and traces under one interface. With native OpenTelemetry support and a focus on distributed tracing for microservices architectures, SigNoz is perfect for organizations seeking cost-effective yet powerful observability solutions.

Key features:
- Distributed tracing for microservices
- Real-time metrics collection
- Centralized log management
- Customizable dashboards
- Native OpenTelemetry support

Pros:
- Completely free if self-hosted
- Active development community
- Cost-effective managed cloud option
- Comprehensive observability stack

Cons:
- Requires infrastructure setup if self-hosted
- Limited enterprise-level support compared to proprietary tools

Try SigNoz

Evaluate your infrastructure complexity and budget to select the best tool that aligns with your goals!
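Here is the PromQL sketch mentioned under Prometheus & Grafana: two illustrative queries you might chart in Grafana, assuming the standard cAdvisor metrics that Kubernetes exposes to Prometheus:

```promql
# Per-container CPU usage, averaged over the last 5 minutes
sum(rate(container_cpu_usage_seconds_total[5m])) by (container)

# Per-container working-set memory right now
sum(container_memory_working_set_bytes) by (container)
```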
  4. by: Janus Atienza Tue, 28 Jan 2025 23:16:45 +0000

As a digital marketing agency, your focus is to provide high-quality services to your clients while ensuring that operations run smoothly. However, managing the various components of SEO, such as link-building, can be time-consuming and resource-draining. This is where white-label link-building services come into play. By outsourcing your link-building efforts, you can save time and resources, allowing your agency to focus on more strategic tasks that directly contribute to your clients’ success. Below, we’ll explore how these services can benefit your agency in terms of time and resource management.

Focus on Core Competencies

When you choose to outsource your link-building efforts to a white-label service, it allows your agency to focus on its core competencies. As an agency, you may excel in content strategy, social media marketing, or paid advertising. However, link-building requires specialized knowledge, experience, and resources. A white-label link-building service can handle this aspect of SEO for you, freeing up time for your team to focus on what they do best. This way, you can maintain a high level of performance in other areas without spreading your team too thin.

Eliminate the Need for Specialized Staff

Building a successful link-building strategy requires expertise which may not be available within your existing team. Hiring specialized staff to manage outreach campaigns, content creation, and link placements can be expensive and time-consuming. White-label link-building services, however, already have the necessary expertise and resources in place. You won’t need to hire or train new employees to handle this aspect of SEO. The service provider’s team can execute campaigns quickly and effectively, allowing your agency to scale without expanding its internal workforce.

Access to Established Relationships and Networks

Link-building is not just about placing links on any website; it’s about building relationships with authoritative websites in your client’s industry, especially within relevant open-source projects and Linux communities. This process takes time to establish and requires continuous effort. A white-label link-building service typically has established relationships with high-authority websites, bloggers, and influencers across various industries. By leveraging these networks, they can secure quality backlinks faster and more efficiently than your agency could on its own. This reduces the time spent on outreach and relationship-building, ensuring that your client’s SEO efforts move forward without delays. For Linux-focused sites, this can include participation in relevant forums and contributing to open-source projects.

Efficient Campaign Execution

White-label link-building services are designed to execute campaigns efficiently. These agencies have streamlined processes and advanced tools that allow them to scale campaigns while maintaining quality. They can manage multiple campaigns at once, ensuring that your clients’ link-building needs are met in a timely manner. By outsourcing to a provider with a proven workflow, you can avoid the inefficiencies associated with trying to build an in-house link-building team. This leads to faster execution, better results, and more satisfied clients.

Cost-Effectiveness

Managing link-building in-house can be costly. Aside from the salaries and benefits of hiring staff, you’ll also need to invest in tools, software, and outreach efforts.
White-label link-building services, on the other hand, offer more cost-effective solutions. These providers typically offer packages that include all necessary tools, such as backlink analysis software, outreach platforms, and reporting tools, which can be expensive to purchase and maintain on your own. By outsourcing, you save money on infrastructure and overhead costs, all while getting access to the best tools available.

Reduce Time Spent on Reporting and Analysis

Effective link-building campaigns require consistent monitoring, analysis, and reporting. Generating reports, tracking backlink quality, and assessing the impact of links on search rankings can be time-consuming tasks. When you outsource this responsibility to a white-label link-building service, they will handle reporting on your behalf. The provider will deliver customized reports that highlight key metrics like the number of backlinks acquired, domain authority, traffic increases, and overall SEO performance. This allows you to deliver the necessary information to your clients while saving time on report generation and analysis. For Linux-based servers, this can also involve analyzing server logs for SEO-related issues.

Scalability and Flexibility

As your agency grows, so does the demand for SEO services. One of the challenges agencies face is scaling their link-building efforts to accommodate more clients or larger campaigns. A white-label link-building service offers scalability and flexibility, meaning that as your client base grows, the provider can handle an increased volume of link-building efforts without compromising on quality. Whether you’re managing a single campaign or hundreds of clients, a reliable white-label service can adjust to your needs and ensure that every client receives the attention their SEO efforts deserve.

Mitigate Risks Associated with Link-Building

Link-building, if not done properly, can result in penalties from search engines, harming your client’s SEO performance. Managing link-building campaigns in-house without proper knowledge of SEO best practices can lead to mistakes, such as acquiring low-quality or irrelevant backlinks. White-label link-building services are experts in following search engine guidelines and using ethical link-building practices. By outsourcing, you reduce the risk of penalties, ensuring that your clients’ SEO efforts are safe and aligned with best practices.

Stay Up-to-Date with SEO Trends

SEO is an ever-evolving field, and staying up-to-date with the latest trends and algorithm updates can be a full-time job. White-label link-building services are dedicated to staying current with industry changes. By outsourcing your link-building efforts, you can be sure that the provider is implementing the latest techniques and best practices in their campaigns. This ensures that your client’s link-building strategies are always aligned with search engine updates, maximizing their chances of success. This includes familiarity with SEO tools that run on Linux, such as command-line tools and open-source crawlers, and understanding the nuances of optimizing websites hosted on Linux servers.

Conclusion

White-label link-building services offer significant time and resource savings for digital marketing agencies. By outsourcing link-building efforts, your agency can focus on core business areas, eliminate the need for specialized in-house staff, and streamline campaign execution.
The cost-effectiveness and scalability of these services also make them an attractive option for agencies looking to grow their SEO offerings without overextending their resources. Especially for clients using Linux-based infrastructure, leveraging a white-label service with expertise in this area can be a significant advantage. With a trusted white-label link-building partner, you can deliver high-quality backlinks to your clients, improve their SEO rankings, and drive long-term success. The post White-Label Link Building for Linux-Based Websites: Saving Time and Resources appeared first on Unixmen.
  5. Blogger posted a blog entry in Programmer's Corner
    by: aiparabellum.com Tue, 28 Jan 2025 07:28:06 +0000

In the digital age, where online privacy and security are paramount, tools like Sigma Browser are gaining significant attention. Sigma Browser is a privacy-focused web browser designed to provide users with a secure, fast, and ad-free browsing experience. Built with advanced features to protect user data and enhance online anonymity, Sigma Browser is an excellent choice for individuals and businesses alike. In this article, we’ll dive into its features, how it works, benefits, pricing, and more to help you understand why Sigma Browser is a standout in the world of secure browsing.

Features of Sigma Browser AI

Sigma Browser offers a range of features tailored to ensure privacy, speed, and convenience. Here are some of its key features:

- Ad-Free Browsing: Enjoy a seamless browsing experience without intrusive ads.
- Enhanced Privacy: Built-in privacy tools to block trackers and protect your data.
- Fast Performance: Optimized for speed, ensuring quick page loads and smooth navigation.
- Customizable Interface: Personalize your browsing experience with themes and settings.
- Cross-Platform Sync: Sync your data across multiple devices for a unified experience.
- Secure Browsing: Advanced encryption to keep your online activities private.

How It Works

Sigma Browser is designed to be user-friendly while prioritizing security. Here’s how it works:

1. Download and Install: Simply download Sigma Browser from its official website and install it on your device.
2. Set Up Privacy Settings: Customize your privacy preferences, such as blocking trackers and enabling encryption.
3. Browse Securely: Start browsing the web with enhanced privacy and no ads.
4. Sync Across Devices: Log in to your account to sync bookmarks, history, and settings across multiple devices.
5. Regular Updates: The browser receives frequent updates to improve performance and security.

Benefits of Sigma Browser AI

Using Sigma Browser comes with numerous advantages:

- Improved Privacy: Protects your data from third-party trackers and advertisers.
- Faster Browsing: Eliminates ads and optimizes performance for quicker loading times.
- User-Friendly: Easy to set up and use, even for non-tech-savvy individuals.
- Cross-Device Compatibility: Access your browsing data on any device.
- Customization: Tailor the browser to suit your preferences and needs.

Pricing

Sigma Browser offers flexible pricing plans to cater to different users:

- Free Version: Includes basic features like ad-free browsing and privacy protection.
- Premium Plan: Unlocks advanced features such as cross-device sync and priority support.

Pricing details are available on the official website.

Sigma Browser Review

Sigma Browser has received positive feedback from users for its focus on privacy and performance. Many appreciate its ad-free experience and the ability to customize the interface. The cross-platform sync feature is also a standout, making it a convenient choice for users who switch between devices. Some users have noted that the premium plan could offer more features, but overall, Sigma Browser is highly regarded for its security and ease of use.

Conclusion

Sigma Browser is a powerful tool for anyone looking to enhance their online privacy and browsing experience. With its ad-free interface, robust privacy features, and fast performance, it stands out as a reliable choice in the crowded browser market. Whether you’re a casual user or a business professional, Sigma Browser offers the tools you need to browse securely and efficiently.
Give it a try and experience the difference for yourself. Visit Website

The post Sigma Browser appeared first on AI Parabellum.
  6. by: Chris Coyier Mon, 27 Jan 2025 17:10:10 +0000

I love a good exposé on how a front-end team operates. Like what technology they use, why, and how, particularly when there are pain points and journeys through them. Jim Simon of Reddit wrote one a bit ago about their team’s build process. They were using something Rollup-based and getting 2-minute build times; they spent quite a bit of time and effort switching to Vite and now are getting sub-1-second build times. I don’t know if “wow Vite is fast” is the right read here, though, as they lost type checking entirely. Vite means esbuild for TypeScript, which just strips types, meaning no build process (locally, in CI, or otherwise) will catch errors. That seems like a massive deal to me as it opens the door to all contributions having TypeScript errors. I admit I’m fascinated by the approach though; it’s kinda like treating TypeScript as a local-only linter. Sure, VS Code complains and gives you red squiggles, but nothing else will, so use that information as you will. Very mixed feelings.

Vite always seems to be front and center in conversations about the JavaScript ecosystem these days. The tooling section of this year’s JavaScript Rising Stars: (Interesting how it’s actually Biome that gained the most stars this year and has large goals about being the toolchain for the web, like Vite.) Vite actually has the bucks now to make a real run at it. It’s always nail-biting and fascinating to see money being thrown around at front-end open source, as a strong business model around all that is hard to find. Maybe there is an enterprise story to capture? Somehow I can see that more easily. I would guess that’s where the new venture vlt is seeing potential. npm, now being owned by Microsoft, certainly had a story there that investors probably liked to see, so maybe vlt can do it again but better. It’s the “you’ve got their data” thing that adds up to me. Not that I love it, I just get it. Vite might have your stack, but we write checks to infrastructure companies.

That tinge of worry extends to Bun and Deno too. I think they can survive decently on the momentum of developers being excited about the speed and features. I wouldn’t say I’ve got a full grasp on it, but I’ve seen some developers be pretty disillusioned or at least trepidatious with Deno and their package registry JSR. But Deno has products! They have enterprise consulting and various hosting. Data and product, I think that is all very smart. Maybe void(0) can find a product play in there. This all reminds me of XState / Stately, which took a bit of funding, does open source, and productizes some of what they do. Their new Store library is getting lots of attention, which is good for the gander. To be clear, I’m rooting for all of these companies. They are small and only lightly funded companies, just like CodePen, trying to make tools to make web development better. 💜
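For what it’s worth, a common way to restore the type checking that esbuild skips (my addition, not from the post) is to run the TypeScript compiler as a separate, no-output step alongside the Vite build. A minimal package.json sketch, with illustrative script names:

```json
{
  "scripts": {
    "dev": "vite",
    "build": "vite build",
    "typecheck": "tsc --noEmit",
    "ci": "npm run typecheck && npm run build"
  }
}
```

Wiring the typecheck script into CI means contributions with TypeScript errors fail the pipeline, even though the Vite build itself never checks types.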
  7. by: Andy Clarke Mon, 27 Jan 2025 15:35:44 +0000

Honestly, it’s difficult for me to come to terms with, but almost 20 years have passed since I wrote my first book, Transcending CSS. In it, I explained how and why to use the then-emerging Multi-Column Layout module. Hint: I published an updated version, Transcending CSS Revisited, which is free to read online. Perhaps because, before the web, I’d worked in print, I was over-excited at the prospect of dividing content into columns without needing extra markup purely there for presentation. I’ve used Multi-Column Layout regularly ever since. Yet, CSS Columns remains one of the most underused CSS layout tools. I wonder why that is?

Holes in the specification

For a long time, there were, and still are, plenty of holes in Multi-Column Layout. Rachel Andrew, now a specification editor, noted as much in her article five years ago. She’s right. And that’s still true. You can’t style columns, for example, by alternating background colours using some sort of :nth-column() pseudo-class selector. You can add a column-rule between columns using border-style values like dashed, dotted, and solid (and who can forget those evergreen groove and ridge styles?), but you can’t apply border-image values to a column-rule, which seems odd as they were introduced at roughly the same time. Multi-Column Layout is imperfect, and there’s plenty I wish it could do in the future, but that doesn’t explain why most people ignore what it can do today.

Patchy browser implementation for a long time

Legacy browsers simply ignored the column properties they couldn’t process. But when Multi-Column Layout was first launched, most designers and developers had yet to accept that websites needn’t look the same in every browser. Early on, support for Multi-Column Layout was patchy. However, browsers caught up over time, and although there are still discrepancies, especially in controlling content breaks, Multi-Column Layout has now been implemented widely. Yet, for some reason, many designers and developers I speak to feel that CSS Columns remain broken. Yes, there’s plenty that browser makers should do to improve their implementations, but that shouldn’t prevent people from using the solid parts today.

Readability and usability with scrolling

Maybe the main reason designers and developers haven’t embraced Multi-Column Layout as they have CSS Grid and Flexbox isn’t in the specification or its implementation but in its usability. Rachel pointed this out in her article, and that’s true: no one would enjoy repeatedly scrolling up and down to read a long passage of content set in columns. But, let’s face it, thinking very carefully is what designers and developers should always be doing. Sure, if you’re dumb enough to dump a large amount of content into columns without thinking about its design, you’ll end up serving readers a poor experience. But why would you do that when headlines, images, and quotes can span columns and reset the column flow, instantly improving readability? Add to that container queries and newer unit values for text sizing, and there really isn’t a reason to avoid using Multi-Column Layout any longer.

A brief refresher on properties and values

Let’s run through a refresher.
There are two ways to flow content into multiple columns: first, by defining the number of columns you need using the column-count property.

[CodePen demo]

Second, and often best, is specifying the column width, leaving a browser to decide how many columns will fit along the inline axis. For example, I’m using column-width to specify that my columns are at least 18rem wide. A browser creates as many 18rem columns as possible to fit and then shares any remaining space between them.

[CodePen demo]

Then there is the gutter (or column-gap) between columns, which you can specify using any length unit. I prefer using rem units to maintain the gutters’ relationship to the text size, but if your gutters need to be 1em, you can leave this out, as that’s a browser’s default gap.

[CodePen demo]

The final column property is the divider (or column-rule) between the gutters, which adds visual separation between columns. Again, you can set a thickness and use border-style values like dashed, dotted, and solid.

[CodePen demo]

These examples appear in every Multi-Column Layout tutorial you’ll encounter, including CSS-Tricks’ own Almanac; a compact recap of these properties also appears at the end of this post. The Multi-Column Layout syntax is one of the simplest in the suite of CSS layout tools, which is another reason there are few excuses not to use it.

Multi-Column Layout is even more relevant today

When I wrote Transcending CSS and first explained the emerging Multi-Column Layout, there were no rem or viewport units, no :has() or other advanced selectors, no container queries, and no routine use of media queries because responsive design hadn’t been invented. We didn’t have calc() or clamp() for adjusting text sizes, and there was no CSS Grid or Flexible Box Layout for precise control over a layout. Now we do, and all these properties help to make Multi-Column Layout even more relevant today. Now, you can use rem or viewport units combined with calc() and clamp() to adapt the text size inside CSS Columns. You can use :has() to specify when columns are created, depending on the type of content they contain. Or you might use container queries to implement several columns only when a container is large enough to display them. Of course, you can also combine a Multi-Column Layout with CSS Grid or Flexible Box Layout for even more imaginative layout designs.

Using Multi-Column Layout today

Patty Meltt is an up-and-coming country music sensation. She’s not real, but the challenges of designing and developing websites like hers are. My challenge was to implement a flexible article layout without media queries which adapts not only to screen size but also to whether or not a <figure> is present. To improve the readability of running text in what would potentially be too-long lines, it should be set in columns to narrow the measure. And, as a final touch, the text size should adapt to the width of the container, not the viewport.

Article with no <figure> element: what would potentially be too-long lines of text are set in columns to improve readability by narrowing the measure. Article containing a <figure> element: no column text is needed for this narrower measure.

The HTML for this layout is rudimentary. One <section>, one <main>, and one <figure> (or not):

```html
<section>
  <main>
    <h1>About Patty</h1>
    <p>…</p>
  </main>
  <figure>
    <img>
  </figure>
</section>
```

I started by adding Multi-Column Layout styles to the <main> element using the column-width property to set the width of each column to 40ch (characters).
The max-width and automatic inline margins reduce the content width and center it in the viewport:

```css
main {
  margin-inline: auto;
  max-width: 100ch;
  column-width: 40ch;
  column-gap: 3rem;
  column-rule: .5px solid #98838F;
}
```

Next, I applied a flexible box layout to the <section>, but only if it :has() a direct descendant which is a <figure>:

```css
section:has(> figure) {
  display: flex;
  flex-wrap: wrap;
  gap: 0 3rem;
}
```

The next declaration, min-width: min(100%, 30rem), applied to both the <main> and the <figure>, is a combination of the min-width property and the min() CSS function. The min() function allows you to specify two or more values, and a browser will choose the smallest of them. This is incredibly useful for responsive layouts where you want to control the size of an element based on different conditions:

```css
section:has(> figure) main {
  flex: 1;
  margin-inline: 0;
  min-width: min(100%, 30rem);
}

section:has(> figure) figure {
  flex: 4;
  min-width: min(100%, 30rem);
}
```

What’s efficient about this implementation is that Multi-Column Layout styles are applied throughout, with no need for media queries to switch them on or off.

Adjusting text size in relation to column width helps improve readability. This has only recently become easy to implement with the introduction of container queries and their associated units, including cqi, cqw, cqmin, and cqmax, plus the clamp() function. Fortunately, you don’t have to work out these text sizes manually, as ClearLeft’s Utopia will do the job for you. My headline and paragraph sizes are clamped to their minimum and maximum rem sizes, and between them the text is fluid depending on the container’s inline size:

```css
h1 { font-size: clamp(5.6526rem, 5.4068rem + 1.2288cqi, 6.3592rem); }
h2 { font-size: clamp(1.9994rem, 1.9125rem + 0.4347cqi, 2.2493rem); }
p  { font-size: clamp(1rem, 0.9565rem + 0.2174cqi, 1.125rem); }
```

So, to specify the <main> as the container on which those text sizes are based, I applied a container query for its inline size:

```css
main {
  container-type: inline-size;
}
```

Open the final result in a desktop browser, when you’re in front of one. It’s a flexible article layout without media queries which adapts to screen size and the presence of a <figure>. Multi-Column Layout sets text in columns to narrow the measure, and the text size adapts to the width of its container, not the viewport.

[CodePen demo]

Modern CSS is solving many prior problems:

- Structure content with spanning elements, which restart the flow of columns and prevent people from scrolling long distances.
- Prevent figures from dividing their images and captions between columns.

Almost every article I’ve ever read about Multi-Column Layout focuses on its flaws, especially usability. CSS-Tricks’ own Geoff Graham even mentioned the scrolling up and down issue when he asked, “When Do You Use CSS Columns?” Fortunately, the column-span property, which enables headlines, images, and quotes to span columns, resets the column flow, and instantly improves readability, now has solid support in browsers:

```css
h1, h2, blockquote {
  column-span: all;
}
```

But the solution to the scrolling up and down issue isn’t purely technical. It also requires content design. This means that content creators and designers must think carefully about the frequency and type of spanning elements, dividing a Multi-Column Layout into shallower sections, reducing the need to scroll and improving someone’s reading experience.
Another prior problem was headlines becoming detached from their content, and figures dividing their images and captions between columns. Thankfully, the break-after property now also has widespread support, so orphaned images and captions are a thing of the past:

```css
figure {
  break-after: column;
}
```

Open this final example in a desktop browser:

[CodePen demo]

You should take a fresh look at Multi-Column Layout

Multi-Column Layout isn’t a shiny new tool. In fact, it remains one of the most underused layout tools in CSS. It’s had, and still has, plenty of problems, but they haven’t reduced its usefulness or its ability to add an extra level of refinement to a product or website’s design. Whether you haven’t used Multi-Column Layout in a while or have never tried it, now’s the time to take a fresh look at it.

Revisiting CSS Multi-Column Layout originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
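And here is the compact recap promised in the refresher: the core multi-column properties gathered into one rule. The values mirror the article’s examples (18rem columns, rem-based gutters, a thin rule), but they are illustrative, not prescriptive:

```css
main {
  /* Preferred column width; the browser fits as many columns as it can
     along the inline axis and shares leftover space between them */
  column-width: 18rem;

  /* Or fix the number of columns instead: column-count: 2; */

  /* Gutter between columns; rem ties it to the text size */
  column-gap: 2rem;

  /* Divider drawn inside the gutter, styled like a border */
  column-rule: 1px dashed #98838f;
}
```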
  8. by: Janus Atienza Sun, 26 Jan 2025 16:06:55 +0000

Kali Linux is a Debian-based, open-source operating system that’s ideal for penetration testing, reverse engineering, security auditing, and computer forensics. It follows a rolling release model, with multiple updates of the OS available in a year, offering you access to a pool of advanced tools that keep your software secure. But how do you update Kali Linux to the latest version to avoid risks and compatibility issues? To help you in this regard, we are going to discuss the step-by-step process of updating Kali Linux and its benefits. Let’s begin!

How to Update Kali Linux: Step-by-Step Guide

Hired to build smart solutions, a lot of custom IoT development professionals use Kali Linux for advanced penetration testing and even reverse engineering. However, it is important to keep it updated to avoid vulnerabilities. Before starting the update process, you must have a stable internet connection and administrative rights. Here are the steps you can follow:

Step 1: Check the Sources List File

The Kali Linux package manager fetches updates from the repository, so you first need to make sure that the system’s repository list is properly configured and aligned. Here’s how to check it:

Open the terminal and run the following command to view the sources list file:

```bash
cat /etc/apt/sources.list
```

The output will include this line if your system is using the Kali Linux rolling release repository:

```
deb http://kali.download/kali kali-rolling main contrib non-free non-free-firmware
```

If the file is empty or has incorrect entries, you can edit it using editors like Nano or Vim. Once you are sure that the list has only official and correct Kali Linux entries, save and close the editor.

Step 2: Update the Package Information

The next step is to update the package information using the repository list so the Kali Linux system knows about all the latest versions and updates available. In the terminal, run this command:

```bash
sudo apt update
```

This command updates the system’s package index to the latest repository information. You will also see a list of packages being checked and their status (available for upgrade or not). Note: it only refreshes the list of available packages and doesn’t install or update them!

Step 3: Do a System Upgrade

The third step involves performing a system upgrade to install the latest versions and updates. Run the apt upgrade command to update all the installed packages to their latest versions; unlike a full system upgrade, this command doesn’t remove or install any package from the system. You can instead use apt full-upgrade, which upgrades all the packages and may add or remove some to keep your system running smoothly. The apt dist-upgrade command is used when you want to handle package dependency changes, remove obsolete packages, and add new ones. Review all the changes the commands propose and confirm the upgrade.

Step 4: Get Rid of Unnecessary Packages

Over time, useless files can accumulate in your system, taking up valuable disk space. You should get rid of them to declutter the system and reclaim storage.
Here are the steps for that:

To remove the leftover packages, run the command:

```bash
sudo apt autoremove -y
```

Cached files also take up a lot of disk space, and you can remove them via the following command:

```bash
sudo apt autoclean
```

Step 5: Double-Check the Update

Once you are done installing the latest software, you should double-check that the system is actually running the upgrade. For this, give the command:

```bash
cat /etc/os-release
```

You can then see operating system information like version details and release date.

Step 6: It’s Time to Reboot the System

This step is optional, but we suggest rebooting Kali Linux to ensure that the system is running the latest version and that all changes are fully applied. You can then perform tasks like security testing of custom IoT development processes. The command for this is:

```bash
sudo reboot
```

Why Update Kali Linux to the Latest Version?

Software development and deployment trends are changing quickly. Now that you know how to update and upgrade Kali Linux, you must be wondering why you should update the system and what its impacts are. If so, here are some compelling reasons:

Security Fixes and Patches

Cybercrimes are quickly increasing, and statistics show that 43% of organizations lose existing customers because of cyber attacks. Additionally, individuals lose around $318 billion to cybercrime. When you update to the latest version of Kali Linux, you get advanced security fixes and patches. They remove system vulnerabilities and help make sure that professionals don’t fall victim to such malicious attempts.

Access to New Tools and Features

Kali Linux offers many features and tools like Metasploit, Nmap, and others, and they receive consistent updates from their developers. Upgrading the OS ensures that you are using the latest version of all pre-installed tools. You enjoy better functionality and improved system performance that make your daily tasks more efficient. For instance, the updated version of Nmap has fast scanning capabilities that pave the way for quick security auditing and troubleshooting.

Compatibility with New Technologies

Technology is evolving, and new protocols and software are introduced every day. The developers behind Kali Linux are well aware of these shifts. They push regular updates that support these newer technologies for better system compatibility.

Conclusion

The process of updating Kali Linux becomes easy if you are aware of the correct commands and understand the difference between the upgrade options. Most importantly, don’t forget to reboot your system after a major update, like a kernel update, to make sure that changes are configured properly.

FAQs

How often should I update Kali Linux?

It’s advisable to update Kali Linux at least once a week or whenever there are new system updates. The purpose is to make sure that the system is secure and has all the latest features by receiving security patches and addressing all vulnerabilities.

Can I update Kali Linux without using the terminal?

No, you cannot update Kali Linux without using the terminal. To update the system, you can use the apt and apt-get commands. The steps involved in this process include checking the sources file, updating the package repository, and upgrading the system.

Is Kali Linux good for learning cyber security?

Yes, Kali Linux is a good tool for learning cyber security. It has a range of tools for penetration testing, network security, analysis, and vulnerability scanning.

The post How to Update Kali Linux? appeared first on Unixmen.
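If you would rather not run Steps 2-4 one at a time, they chain cleanly into a single line (assuming the default kali-rolling sources shown in Step 1):

```bash
# Refresh the package index, apply the full upgrade, then clean up
sudo apt update && sudo apt full-upgrade -y && sudo apt autoremove -y && sudo apt autoclean
```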
  9. by: Janus Atienza Sun, 26 Jan 2025 00:06:01 +0000

AI-powered tools are changing the software development scene as we speak. AI assistants can not only help with coding, using advanced machine learning algorithms to improve their service, but they can also help with code refactoring, testing, and bug detection. Tools like GitHub Copilot and Tabnine aim to automate various processes, allowing developers more free time for other, more creative tasks. Of course, implementing AI tools takes time and careful risk assessment because various factors need to be taken into consideration. Let’s review some of the most popular automation tools available for Linux.

Why Use AI-Powered Software Tools in Linux?

AI is being widely used across various spheres of our lives, with businesses utilizing the power of Artificial Intelligence to create new services and products. Even sites like Depositphotos have started offering AI services to create exclusive licensed photos that can be used anywhere: on websites, in advertising, design, and print media. Naturally, software development teams and Linux users have also started implementing AI-powered tools to improve their workflow. Here are some of the benefits of using such tools:

- An improved user experience.
- Fewer human errors in various processes.
- Automation of repetitive tasks boosts overall productivity.
- New features become available.
- Innovative problem-solving.

Top AI Automation Tools for Linux

Streamlining processes can greatly increase productivity, allowing developers and Linux users to delegate repetitive tasks to AI-powered software. These tools offer innovative solutions while optimizing different parts of the development process. Let’s review some of them.

1. GitHub Copilot

Just a few years ago, no one could’ve imagined that coding could be done by an AI algorithm. This AI-powered software can predict the completion of the code that’s being created, offering different functions and snippets on the go. GitHub Copilot can become an invaluable tool for both expert and novice coders. The algorithms can understand the code that’s being written using OpenAI’s Codex model. It supports various programming languages and can be easily integrated with the majority of IDEs. One of its key benefits is code suggestion based on the context of what’s being created.

2. DeepCode

One of the biggest issues all developers face when writing code is potential bugs. This is where an AI-powered code review tool can come in handy. While it won’t help you create the code, it will look for vulnerabilities inside your project, giving context-based feedback and a variety of suggestions to fix the bugs found by the program. Thus, it can help developers improve the quality of their work. DeepCode uses machine learning to become more helpful over time, offering improved suggestions as it learns more about the type of work done by the developer. This tool can easily integrate with GitLab, GitHub, and Bitbucket.

3. Tabnine

Do you want an AI-powered tool that can actually learn from your coding style and offer suggestions based on it? Tabnine can do exactly that, predicting functions and offering snippets of code based on what you’re writing. It can be customized for a variety of needs and operations while supporting 30 programming languages. You can use this tool offline for improved security.

4. CircleCI

This is a powerful continuous integration and continuous delivery platform that helps automate software development operations.
It helps engineering teams build code easily, offering automatic tests at each stage of the process whenever a change is introduced. You can develop your app quickly with CircleCI's automated testing, which covers mobile, serverless, API, web, and AI frameworks. This CI/CD platform helps you significantly reduce testing time and build simple, stable systems.

5. Selenium

This is one of the most popular testing tools used by developers all over the world. It's compatible across various platforms, including Linux, thanks to the open-source nature of the framework. It offers a seamless process for generating and managing test cases, as well as compiling project reports. It can also work alongside continuous automated testing tools for better results.

6. Code Intelligence

This is yet another tool capable of analyzing source code to detect bugs and vulnerabilities without human supervision. It can find inconsistencies that are often missed by other testing methods, allowing development teams to resolve issues before the software is released. The tool works autonomously and simplifies root cause analysis. It uses self-learning AI capabilities to speed up the testing process and swiftly pinpoints the line of code that contains a bug.

7. ONLYOFFICE Docs

This open-source office suite allows real-time collaboration and offers a few interesting options when it comes to AI. You can install a plugin to get free access to ChatGPT and use its features while creating a document. Some of the handiest include translation, spellcheck, grammar correction, word analysis, and text generation. You can also generate images for your documents and chat with ChatGPT while working on your project.

Conclusion

When it comes to the Linux operating system, there are numerous AI-powered automation tools you can try. Many of them are used in software development to improve the code-writing process and give developers more free time for other tasks. AI tools use machine learning to provide better service over time while offering a variety of ways to streamline your workflow. Tools such as DeepCode, Tabnine, GitHub Copilot, and Selenium can help you find solutions whenever you're facing issues with your software, offering snippets of code on the go while checking your project for bugs.

The post How AI is Revolutionizing Linux System Administration: Tools and Techniques for Automation appeared first on Unixmen.
  11. By: Janus Atienza Sat, 25 Jan 2025 23:26:38 +0000 In today’s digital age, safeguarding your communication is paramount. Email encryption serves as a crucial tool to protect sensitive data from unauthorized access. Linux users, known for their preference for open-source solutions, must embrace encryption to ensure privacy and security. With increasing cyber threats, the need for secure email communications has never been more critical. Email encryption acts as a protective shield, ensuring that only intended recipients can read the content of your emails. For Linux users, employing encryption techniques not only enhances personal data protection but also aligns with the ethos of secure and open-source computing. This guide will walk you through the essentials of setting up email encryption on Linux and how you can integrate advanced solutions to bolster your security. Setting up email encryption on Linux Implementing email encryption on Linux can be straightforward with the right tools. Popular email clients like Thunderbird and Evolution support OpenPGP and S/MIME protocols for encrypting emails. Begin by installing GnuPG, an open-source software that provides cryptographic privacy and authentication. Once installed, generate a pair of keys—a public key to share with those you communicate with and a private key that remains confidential to you. Configure your chosen email client to use these keys for encrypting and decrypting emails. The interface typically offers user-friendly options to enable encryption settings directly within the email composition window. To further assist in this setup, many online tutorials offer detailed guides complete with screenshots to ease the process for beginners. Additionally, staying updated with the latest software versions is recommended to ensure optimal security features are in place. How email encryption works Email encryption is a process that transforms readable text into a scrambled format that can only be decoded by the intended recipient. It is essential for maintaining privacy and security in digital communications. As technology advances, so do the methods used by cybercriminals to intercept sensitive information. Thus, understanding the principles of email encryption becomes crucial. The basic principle of encryption involves using keys—a public key for encrypting emails and a private key for decrypting them. This ensures that even if emails are intercepted during transmission, they remain unreadable without the correct decryption key. Whether you’re using email services like Gmail or Outlook, integrating encryption can significantly reduce the risk of data breaches. Many email providers offer built-in encryption features, but for Linux users seeking more control, there are numerous open-source tools available. Email encryption from Trustifi provides an additional layer of security by incorporating advanced AI-powered solutions into your existing setup. Integrating advanced encryption solutions For those seeking enhanced security measures beyond standard practices, integrating solutions like Trustifi into your Linux-based email clients can be highly beneficial. Trustifi offers services such as inbound threat protection and outbound email encryption powered by AI technology. The integration process involves installing Trustifi’s plugin or API into your existing email infrastructure. This enables comprehensive protection against potential threats while ensuring that encrypted communications are seamless and efficient. 
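Zooming back out to the GnuPG setup described earlier, the key-management steps look roughly like this in practice. This is a minimal sketch with standard gpg commands; the email address is a placeholder, and your client's key-import flow may differ:

# Generate a new key pair interactively (algorithm, size, expiry)
gpg --full-generate-key

# Confirm the pair was created
gpg --list-secret-keys --keyid-format=long

# Export your public key to share with correspondents
gpg --armor --export you@example.com > pubkey.asc

# Import a correspondent's public key so you can encrypt mail to them
gpg --import their-pubkey.asc

With the keys in place, a client like Thunderbird can be pointed at them for encrypting and decrypting mail.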
With Trustifi’s advanced algorithms, businesses can rest assured that their communications are safeguarded against both current and emerging cyber threats. This approach not only protects sensitive data but also simplifies compliance with regulatory standards regarding data protection and privacy. Businesses leveraging such tools position themselves better in preventing data breaches and maintaining customer trust. Best practices for secure email communication Beyond technical setups, maintaining secure email practices is equally important. Start by using strong passwords that combine letters, numbers, and symbols; avoid easily guessed phrases or patterns. Enabling two-factor authentication adds another layer of security by requiring additional verification steps before accessing accounts. Regularly updating software helps protect against vulnerabilities that hackers might exploit. Many systems offer automatic updates; however, manually checking for updates can ensure no critical patches are missed. Staying informed about the latest security threats allows users to adapt their strategies accordingly. Ultimately, being proactive about security measures cultivates a safer digital environment for both personal and professional communications. Adopting these practices alongside robust encryption technologies ensures comprehensive protection against unauthorized access. The post Mastering email encryption on Linux appeared first on Unixmen.
  12. by: Preethi Fri, 24 Jan 2025 14:59:25 +0000

When it comes to positioning elements on a page, including text, there are many ways to go about it in CSS — the literal position property with corresponding inset-* properties, translate, margin, anchor() (limited browser support at the moment), and so forth. The offset property is another one that belongs in that list.

The offset property is typically used for animating an element along a predetermined path. For instance, the square in the following example traverses a circular path:

<div class="circle">
  <div class="square"></div>
</div>

@property --p {
  syntax: '<percentage>';
  inherits: false;
  initial-value: 0%;
}

.square {
  offset: top 50% right 50% circle(50%) var(--p);
  transition: --p 1s linear;
  /* Equivalent to:
  offset-position: top 50% right 50%;
  offset-path: circle(50%);
  offset-distance: var(--p); */
  /* etc. */
}

.circle:hover .square {
  --p: 100%;
}

CodePen Embed Fallback

A registered CSS custom property (--p) is used to set and animate the offset distance of the square element. The animation is possible because an element can be positioned at any point in a given path using offset. And maybe you didn't know this, but offset is a shorthand property composed of the following constituent properties:

- offset-position: The path's starting point
- offset-path: The shape along which the element can be moved
- offset-distance: A distance along the path on which the element is moved
- offset-rotate: The rotation angle of an element relative to its anchor point and offset path
- offset-anchor: A position within the element that's aligned to the path

The offset-path property is the one that's important to what we're trying to achieve. It accepts a shape value — including SVG shapes or CSS shape functions — as well as reference boxes of the containing element to create the path.

Reference boxes? Those are an element's dimensions according to the CSS Box Model, including content-box, padding-box, border-box, as well as SVG contexts, such as the view-box, fill-box, and stroke-box. These simplify how we position elements along the edges of their containing elements. Here's an example: all the small squares below are placed in the default top-left corner of their containing elements' content-box. In contrast, the small circles are positioned along the top-right corner (25% into their containing elements' square perimeter) of the content-box, border-box, and padding-box, respectively.

<div class="big">
  <div class="small circle"></div>
  <div class="small square"></div>
  <p>She sells sea shells by the seashore</p>
</div>
<div class="big">
  <div class="small circle"></div>
  <div class="small square"></div>
  <p>She sells sea shells by the seashore</p>
</div>
<div class="big">
  <div class="small circle"></div>
  <div class="small square"></div>
  <p>She sells sea shells by the seashore</p>
</div>

.small {
  /* etc. */
  position: absolute;

  &.square {
    offset: content-box;
    border-radius: 4px;
  }
  &.circle {
    border-radius: 50%;
  }
}
.big {
  /* etc. */
  contain: layout; /* (or position: relative) */

  &:nth-of-type(1) {
    .circle { offset: content-box 25%; }
  }
  &:nth-of-type(2) {
    border: 20px solid rgb(170 232 251);
    .circle { offset: border-box 25%; }
  }
  &:nth-of-type(3) {
    padding: 20px;
    .circle { offset: padding-box 25%; }
  }
}

CodePen Embed Fallback

Note: You can separate the element's offset-positioned layout context if you don't want to allocate space for it inside its containing parent element.
That's how I've approached it in the example above so that the paragraph text inside can sit flush against the edges. As a result, the offset-positioned elements (small squares and circles) are given their own contexts using position: absolute, which removes them from the normal document flow.

This method, positioning relative to reference boxes, makes it easy to place elements like notification dots and ornamental ribbon tips along the periphery of some UI module. It further simplifies the placement of text along a containing block's edges, as offset can also rotate elements along the path, thanks to offset-rotate. A simple example shows the date of an article placed at a block's right edge:

<article>
  <h1>The Irreplaceable Value of Human Decision-Making in the Age of AI</h1>
  <!-- paragraphs -->
  <div class="date">Published on 11<sup>th</sup> Dec</div>
  <cite>An excerpt from the HBR article</cite>
</article>

article {
  container-type: inline-size;
  /* etc. */
}
.date {
  offset: padding-box 100cqw 90deg / left 0 bottom -10px;
  /* Equivalent to:
  offset-path: padding-box;
  offset-distance: 100cqw; (100% of the container element's width)
  offset-rotate: 90deg;
  offset-anchor: left 0 bottom -10px; */
}

CodePen Embed Fallback

As we just saw, using the offset property with a reference box path and container units is even more efficient — you can easily set the offset distance based on the containing element's width or height. I'll include a reference for learning more about container queries and container query units in the "Further Reading" section at the end of this article.

There's also the offset-anchor property that's used in that last example. It provides the anchor for the element's displacement and rotation — for instance, the 90-degree rotation in the example happens from the element's bottom-left corner. The offset-anchor property can also be used to move the element either inward or outward from the reference box by adjusting inset-* values — for instance, the bottom -10px arguments pull the element's bottom edge outward from its containing element's padding-box. This enhances the precision of placements, also demonstrated below.

<figure>
  <div class="big">4</div>
  <div class="small">number four</div>
</figure>

.small {
  width: max-content;
  offset: content-box 90% -54deg / center -3rem;
  /* Equivalent to:
  offset-path: content-box;
  offset-distance: 90%;
  offset-rotate: -54deg;
  offset-anchor: center -3rem; */
  font-size: 1.5rem;
  color: navy;
}

CodePen Embed Fallback

As shown at the beginning of the article, offset positioning is animatable, which allows for dynamic design effects, like this:

<article>
  <figure>
    <div class="small one">17<sup>th</sup> Jan. 2025</div>
    <span class="big">Seminar<br>on<br>Literature</span>
    <div class="small two">Tickets Available</div>
  </figure>
</article>

@property --d {
  syntax: "<percentage>";
  inherits: false;
  initial-value: 0%;
}

.small {
  /* other style rules */
  offset: content-box var(--d) 0deg / left center;
  /* Equivalent to:
  offset-path: content-box;
  offset-distance: var(--d);
  offset-rotate: 0deg;
  offset-anchor: left center; */
  transition: --d .2s linear;

  &.one { --d: 2%; }
  &.two { --d: 70%; }
}

article:hover figure {
  .one { --d: 15%; }
  .two { --d: 80%; }
}

CodePen Embed Fallback

Wrapping up

Whether for graphic designs like text along borders, textual annotations, or even dynamic text like error messaging, CSS offset is an easy-to-use option to achieve all of that.
We can position the elements along the reference boxes of their containing parent elements, rotate them, and even add animation if needed. Further reading The CSS offset-path property: CSS-Tricks, MDN The CSS offset-anchor property: CSS-Tricks, MDN Container query length units: CSS-Tricks, MDN The @property at-rule: CSS-Tricks, web.dev The CSS Box Model: CSS-Tricks SVG Reference Boxes: W3C Positioning Text Around Elements With CSS Offset originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  13. by: Geoff Graham Thu, 23 Jan 2025 17:21:15 +0000 I was reading through Juan’s recent Almanac entry for the @counter-style at-rule and I’ll be darned if he didn’t uncover and unpack some extremely interesting things that we can do to style lists, notably the list marker. You’re probably already aware of the ::marker pseudo-element. You’ve more than likely dabbled with custom counters using counter-reset and counter-increment. Or maybe your way of doing things is to wipe out the list-style (careful when doing that!) and hand-roll a marker on the list item’s ::before pseudo. But have you toyed around with @counter-style? Turns out it does a lot of heavy lifting and opens up new ways of working with lists and list markers. You can style the marker of just one list item This is called a “fixed” system set to a specific item. @counter-style style-fourth-item { system: fixed 4; symbols: "💠"; suffix: " "; } li { list-style: style-fourth-item; } CodePen Embed Fallback You can assign characters to specific markers If you go with an “additive” system, then you can define which symbols belong to which list items. @counter-style dice { system: additive; additive-symbols: 6 "⚅", 5 "⚄", 4 "⚃", 3 "⚂", 2 "⚁", 1 "⚀"; suffix: " "; } li { list-style: dice; } CodePen Embed Fallback Notice how the system repeats once it reaches the end of the cycle and begins a new series based on the first item in the pattern. So, for example, there are six sides to typical dice and we start rolling two dice on the seventh list item, totaling seven. You can add a prefix and suffix to list markers A long while back, Chris showed off a way to insert punctuation at the end of a list marker using the list item’s ::before pseudo: ol { list-style: none; counter-reset: my-awesome-counter; li { counter-increment: my-awesome-counter; &::before { content: counter(my-awesome-counter) ") "; } } } That’s much easier these days with @counter-styles: @counter-style parentheses { system: extends decimal; prefix: "("; suffix: ") "; } CodePen Embed Fallback You can style multiple ranges of list items Let’s say you have a list of 10 items but you only want to style items 1-3. We can set a range for that: @counter-style single-range { system: extends upper-roman; suffix: "."; range: 1 3; } li { list-style: single-range; } CodePen Embed Fallback We can even extend our own dice example from earlier: @counter-style dice { system: additive; additive-symbols: 6 "⚅", 5 "⚄", 4 "⚃", 3 "⚂", 2 "⚁", 1 "⚀"; suffix: " "; } @counter-style single-range { system: extends dice; suffix: "."; range: 1 3; } li { list-style: single-range; } CodePen Embed Fallback Another way to do that is to use the infinite keyword as the first value: @counter-style dice { system: additive; additive-symbols: 6 "⚅", 5 "⚄", 4 "⚃", 3 "⚂", 2 "⚁", 1 "⚀"; suffix: " "; } @counter-style single-range { system: extends dice; suffix: "."; range: infinite 3; } li { list-style: single-range; } Speaking of infinite, you can set it as the second value and it will count up infinitely for as many list items as you have. Maybe you want to style two ranges at a time and include items 6-9. I’m not sure why the heck you’d want to do that but I’m sure you (or your HIPPO) have got good reasons. 
@counter-style dice { system: additive; additive-symbols: 6 "⚅", 5 "⚄", 4 "⚃", 3 "⚂", 2 "⚁", 1 "⚀"; suffix: " "; } @counter-style multiple-ranges { system: extends dice; suffix: "."; range: 1 3, 6 9; } li { list-style: multiple-ranges; } CodePen Embed Fallback You can add padding around the list markers You ever run into a situation where your list markers are unevenly aligned? That usually happens when going from, say, a single digit to a double-digit. You can pad the marker with extra characters to line things up. /* adds leading zeroes to list item markers */ @counter-style zero-padded-example { system: extends decimal; pad: 3 "0"; } Now the markers will always be aligned… well, up to 999 items. CodePen Embed Fallback That’s it! I just thought those were some pretty interesting ways to work with list markers in CSS that run counter (get it?!) to how I’ve traditionally approached this sort of thing. And with @counter-style becoming Baseline “newly available” in September 2023, it’s well-supported in browsers. Some Things You Might Not Know About Custom Counter Styles originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  14. by: Abhishek Kumar Thu, 23 Jan 2025 11:22:15 +0530

Imagine this: You've deployed a handful of Docker containers to power your favorite applications, maybe a self-hosted Nextcloud for your files, a Pi-hole for ad-blocking, or even a media server like Jellyfin. Everything is running like a charm, but then you hit a common snag: keeping those containers updated.

When a new image is released, you'll need to manually pull it, stop the running container, recreate it with the updated image, and hope everything works as expected. Multiply that by the number of containers you're running, and it's clear how this quickly becomes a tedious and time-consuming chore.

But there's more at stake than just convenience. Skipping updates or delaying them for too long means outdated software running in your containers, which often means unpatched vulnerabilities. These can become a serious security risk, especially if you're hosting services exposed to the internet.

This is where Watchtower steps in, a tool designed to take the hassle out of container updates by automating the entire process. Whether you're running a homelab or managing a production environment, Watchtower ensures your containers are always up to date and secure, all with minimal effort on your part.

What is Watchtower?

Watchtower is an open-source tool that automatically monitors your Docker containers and updates them whenever a new version of their image is available. It keeps your setup up to date, saving time and reducing the risk of running outdated containers. But it's not just a "set it and forget it" solution; it's also highly customizable, allowing you to tailor its behavior to fit your workflow. Whether you prefer full automation or staying in control of updates, Watchtower has you covered.

How does it work?

Watchtower periodically checks for updates to the images of your running containers. When it detects a newer version, it pulls the updated image, stops the current container, and starts a new one using the updated image. The best part? It maintains your existing container configuration, including port bindings, volume mounts, and environment variables. If your containers depend on each other, Watchtower handles the update process in the correct order to avoid downtime.

Deploying Watchtower

Since you're reading this article, I'll assume you already have some sort of homelab or Docker setup where you want to automate container updates, so I won't be covering Docker installation here. Watchtower can be deployed in two ways:

Docker run

If you're just trying it out or want a straightforward deployment, you can run the following command:

docker run -d --name watchtower -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower

This will spin up a Watchtower container that monitors your running containers and updates them automatically. But here's the thing: I'm not a fan of the docker run command. It's quick, sure, but I prefer a stack approach rather than cramming everything into a single command.
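That said, if you just want to test the waters before committing to either approach, Watchtower also documents a one-shot mode. A quick sketch:

# Run a single update check, then exit (no long-running service)
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower --run-once

The --run-once flag performs one pass over your containers, applies anything new, and exits, which is handy for experimenting in an isolated environment.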
Docker compose

If you fancy using Docker Compose to run Watchtower, here's a minimal configuration that replicates the docker run command above:

version: "3.8"
services:
  watchtower:
    image: containrrr/watchtower
    container_name: watchtower
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

To start Watchtower using this configuration, save it as docker-compose.yml and run:

docker-compose up -d

This gives you the same functionality as the docker run command, but in a cleaner, more manageable format.

Customizing Watchtower with environment variables

Running Watchtower plainly is all good, but we can make it even better with environment variables and command arguments. Personally, I don't like giving full autonomy to one service to automatically make changes on my behalf. Since I have a pretty decent homelab running crucial containers, I prefer using Watchtower to notify me about updates rather than updating everything automatically. This ensures that I remain in control, especially for containers that are finicky or require a perfect pairing with their databases.

Sneak peek into my homelab

Take a look at my homelab setup: it's mostly CMS containers for myself and for clients, and some of them can behave unpredictably if not updated carefully. So instead of letting Watchtower update everything, I configure it to provide insights and alerts, and then I manually decide which updates to apply. To achieve this, we'll add the following environment variables to our Docker Compose file:

- WATCHTOWER_CLEANUP: Removes old images after updates, keeping your Docker host clean.
- WATCHTOWER_POLL_INTERVAL: Sets how often Watchtower checks for updates (in seconds). One hour (3600 seconds) is a good balance.
- WATCHTOWER_LABEL_ENABLE: Updates only containers with specific labels, giving you granular control.
- WATCHTOWER_DEBUG: Enables detailed logs, which can be invaluable for troubleshooting.
- WATCHTOWER_NOTIFICATIONS: Configures the notification method (e.g., email) to keep you informed about updates.
- WATCHTOWER_NOTIFICATION_EMAIL_FROM: The email address from which notifications will be sent.
- WATCHTOWER_NOTIFICATION_EMAIL_TO: The recipient email address for update notifications.
- WATCHTOWER_NOTIFICATION_EMAIL_SERVER: SMTP server address for sending notifications.
- WATCHTOWER_NOTIFICATION_EMAIL_SERVER_PORT: Port used by the SMTP server (commonly 587 for TLS).
- WATCHTOWER_NOTIFICATION_EMAIL_USERNAME: SMTP server username for authentication.
- WATCHTOWER_NOTIFICATION_EMAIL_PASSWORD: SMTP server password for authentication.
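One practical note on WATCHTOWER_LABEL_ENABLE: with it set to "true", Watchtower only touches containers that carry its enable label, so each service you want monitored has to opt in. Here is a sketch for a hypothetical service, using the label name from Watchtower's documentation:

services:
  nextcloud:                      # hypothetical service name
    image: nextcloud:latest
    labels:
      # opt this container in while WATCHTOWER_LABEL_ENABLE is "true"
      - com.centurylinklabs.watchtower.enable=true

Containers without the label are simply left alone.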
Here's how the updated docker-compose.yml file would look:

version: "3.8"
services:
  watchtower:
    image: containrrr/watchtower
    container_name: watchtower
    restart: always
    environment:
      WATCHTOWER_CLEANUP: "true"
      WATCHTOWER_POLL_INTERVAL: "3600"
      WATCHTOWER_LABEL_ENABLE: "true"
      WATCHTOWER_DEBUG: "true"
      WATCHTOWER_NOTIFICATIONS: "email"
      WATCHTOWER_NOTIFICATION_EMAIL_FROM: "admin@example.com"
      WATCHTOWER_NOTIFICATION_EMAIL_TO: "notify@example.com"
      WATCHTOWER_NOTIFICATION_EMAIL_SERVER: "smtp.example.com"
      WATCHTOWER_NOTIFICATION_EMAIL_SERVER_PORT: "587"
      WATCHTOWER_NOTIFICATION_EMAIL_USERNAME: "your_email_username"
      WATCHTOWER_NOTIFICATION_EMAIL_PASSWORD: "your_email_password"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

I like to put my credentials in a separate environment file.

Once you run the Watchtower container for the first time, you'll receive an initial email confirming that the service is up and running. After some time, as Watchtower analyzes your setup and scans the running containers, it will notify you if it detects any updates available for your containers. These notifications are sent in real time, so you're always in the loop about potential updates without having to check manually.

Final thoughts

I'm really impressed by Watchtower and have been using it for a month now. If possible, play around with it in an isolated environment first; that's what I did before deploying it in my homelab. The email notification feature is great, but my inbox is now filled with Watchtower emails, so I might create a rule to manage them better. Overall, no complaints so far! I find it better than the Docker Compose method we discussed earlier.

Related: Updating Docker Containers With Zero Downtime, by Avimanyu Bandyopadhyay on Linux Handbook: a step-by-step methodology that can be very helpful in your day-to-day DevOps activities without sacrificing invaluable uptime.

What about you? What do you use to update your containers? If you've tried Watchtower, share your experience. Anything I should be mindful of? Let us know in the comments!
  15. by: Geoff Graham Tue, 21 Jan 2025 14:21:32 +0000

Chris wrote about "Likes" pages a long while back. The idea is rather simple: "Like" an item in your RSS reader and display it in a feed of other liked items. The little example Chris made is still really good. CodePen Embed Fallback

There were two things Chris noted at the time. One was that he used a public CORS proxy that he wouldn't use in a production environment. Good idea to nix that, security and all. The other was that he'd consider using WordPress transients to fetch and cache the data to work around CORS. I decided to do that! The result is this WordPress block I can drop right in here. I'll plop it in a <details> to keep things brief.

Open Starred Feed
- 1/16/2025: Don't Wrap Figure in a Link (adrianroselli.com)
- 1/15/2025: Learning HTML is the best investment I ever did (christianheilmann.com)
- 1/14/2025: Open Props UI (nerdy.dev)
- 1/12/2025: Gotchas in Naming CSS View Transitions (blog.jim-nielsen.com)

Here's the full snippet that fetches the feed and caches it with a transient:

function fetch_and_store_data() {
  $transient_key = 'fetched_data';
  $cached_data = get_transient($transient_key);

  if ($cached_data) {
    return new WP_REST_Response($cached_data, 200);
  }

  $response = wp_remote_get('https://feedbin.com/starred/a22c4101980b055d688e90512b083e8d.xml');

  if (is_wp_error($response)) {
    return new WP_REST_Response('Error fetching data', 500);
  }

  $body = wp_remote_retrieve_body($response);
  $data = simplexml_load_string($body, 'SimpleXMLElement', LIBXML_NOCDATA);
  $json_data = json_encode($data);
  $array_data = json_decode($json_data, true);

  $items = [];
  foreach ($array_data['channel']['item'] as $item) {
    $items[] = [
      'title' => $item['title'],
      'link' => $item['link'],
      'pubDate' => $item['pubDate'],
      'description' => $item['description'],
    ];
  }

  set_transient($transient_key, $items, 12 * HOUR_IN_SECONDS);
  return new WP_REST_Response($items, 200);
}

add_action('rest_api_init', function () {
  register_rest_route('custom/v1', '/fetch-data', [
    'methods' => 'GET',
    'callback' => 'fetch_and_store_data',
  ]);
});

Could this be refactored and written more efficiently? All signs point to yes. But here's how I grokked it:

function fetch_and_store_data() {
}

The function's name can be anything. Naming is hard. The first two variables:

$transient_key = 'fetched_data';
$cached_data = get_transient($transient_key);

The $transient_key is simply a name that identifies the transient when we set it and get it. In fact, the $cached_data is the getter, so that part's done. Check! I only want the $cached_data if it exists, so there's a check for that:

if ($cached_data) {
  return new WP_REST_Response($cached_data, 200);
}

This also establishes a new response from the WordPress REST API, which is where the data is cached. Rather than pull the data directly from Feedbin, I'm pulling it and caching it in the REST API. This way, CORS is no longer an issue being that the starred items are now locally stored on my own domain. That's where the wp_remote_get() function comes in to form that response from Feedbin as the origin:

$response = wp_remote_get('https://feedbin.com/starred/a22c4101980b055d688e90512b083e8d.xml');

Similarly, I decided to throw an error if there's no $response. That means there's no fresh $cached_data, and that's something I want to know right away.
if (is_wp_error($response)) { return new WP_REST_Response('Error fetching data', 500); } The bulk of the work is merely parsing the XML data I get back from Feedbin to JSON. This scours the XML and loops through each item to get its title, link, publish date, and description: $body = wp_remote_retrieve_body($response); $data = simplexml_load_string($body, 'SimpleXMLElement', LIBXML_NOCDATA); $json_data = json_encode($data); $array_data = json_decode($json_data, true); $items = []; foreach ($array_data['channel']['item'] as $item) { $items[] = [ 'title' => $item['title'], 'link' => $item['link'], 'pubDate' => $item['pubDate'], 'description' => $item['description'], ]; } “Description” is a loaded term. It could be the full body of a post or an excerpt — we don’t know until we get it! So, I’m splicing and trimming it in the block’s Edit component to stub it at no more than 50 words. There’s a little risk there because I’m rendering the HTML I get back from the API. Security, yes. But there’s also the chance I render an open tag without its closing counterpart, muffing up my layout. I know there are libraries to address that but I’m keeping things simple for now. Now it’s time to set the transient once things have been fetched and parsed: set_transient($transient_key, $items, 12 * HOUR_IN_SECONDS); The WordPress docs are great at explaining the set_transient() function. It takes three arguments, the first being the $transient_key that was named earlier to identify which transient is getting set. The other two: $value: This is the object we’re storing in the named transient. That’s the $items object handling all the parsing. $expiration: How long should this transient last? It wouldn’t be transient if it lingered around forever, so we set an amount of time expressed in seconds. Mine lingers for 12 hours before it expires and then updates the next time a visitor hits the page. OK, time to return the items from the REST API as a new response: return new WP_REST_Response($items, 200); That’s it! Well, at least for setting and getting the transient. The next thing I realized I needed was a custom REST API endpoint to call the data. I really had to lean on the WordPress docs to get this going: add_action('rest_api_init', function () { register_rest_route('custom/v1', '/fetch-data', [ 'methods' => 'GET', 'callback' => 'fetch_and_store_data', ]); }); That’s where I struggled most and felt like this all took wayyyyy too much time. Well, that and sparring with the block itself. I find it super hard to get the front and back end components to sync up and, honestly, a lot of that code looks super redundant if you were to scope it out. That’s another story altogether. Enjoy reading what we’re reading! I put a page together that pulls in the 10 most recent items with a link to subscribe to the full feed. Creating a “Starred” Feed originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  16. by: Chris Coyier Mon, 20 Jan 2025 16:31:11 +0000

HTML is fun to think about. The old classic battle of "HTML is a programming language" has surfaced in the pages of none other than WIRED magazine. I love this argument, not even for its merit, but for the absolute certainty that you will get people coming out of the woodwork to tell you that HTML is not, in fact, a programming language. Each of them will have their own exotic and deeply personal reasons why. I honestly don't even care or believe there to be any truth to be found in the debate, but I find it fascinating as a social experiment. It's like cracking an IE 6 "joke" at a conference. You will get laughs. I wrote a guest blog post, Relatively New Things You Should Know about HTML Heading Into 2025, at the start of the year, which had me thinking about it anyway. So here's more!

You know there are those mailto: "protocol" style links you can use, like:

<a href="mailto:chriscoyier@gmail.com">Email Chris</a>

And they work fine. Or… mostly fine. They work if there is an email client registered on the device. That's generally the case, but it's not 100%. And there are much more esoteric ones, as Brian Kardell writes:

A tel: link on my Mac tries to open FaceTime. What does it do on a computer with no calling capability at all, like my daughter's Fire tablet thingy? Nothing, probably. Just like clicking on a skype: link on my computer here, which doesn't have Skype installed does: nothing.

A semantic HTML link element that looks and clicks like any other link but does nothing is, well, not good. Brian spells out a situation where it's extra not good, where a link could say something like "Call Pizza Parlor" with the actual phone number buried behind the scenes in HTML, whereas if it was just a phone number, mobile browsers that support it would automatically turn it into a clickable link, which is surely better.

Every once in a while I get excited about the prospect of writing HTML email with just regular ol' semantic HTML that you'd write anywhere else. And to be fair: some people absolutely do that and it's interesting to follow those developments. The last time I tried to get away with "no tables", the #1 thing that stopped me was that you can't get a reasonable width and centered layout without them in old Outlook. Oh well, that's the job sometimes.

Ambiguity. That's one thing there is plenty of in HTML, and I suspect different people's brains handle it quite differently. Some people try something, and if it works they are pleased with that and move on. "Works" being a bit subjective, of course, since working on the exact browser you're using at the time as a developer isn't necessarily reflective of all users. Some people absolutely fret over the correct usage of HTML in all sorts of situations. That's my kinda people. In Stephanie Eckles' A Call for Consensus on HTML Semantics she lists all sorts of these ambiguities, honing in on particularly tricky ones where there are certainly multiple ways to approach it. While I'm OK with the freedom and some degree of ambiguity, I like to sweat the semantics and kinda do wish there were just emphatically right answers sometimes.

Wanna know why hitting an exact markup pattern matters sometimes? Aside from good accessibility and potentially some SEO concern, sometimes you get good bonus behavior. Simon Willison blogged about Footnotes that work in RSS readers, which is one such situation, building on some thoughts and light research I had done.
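For context, a typical footnote pattern looks something like this. It's a hedged sketch of the markup many Markdown processors generate, not necessarily the exact pattern any particular reader requires; the ids are illustrative:

<p>Here is a claim that needs a source.<sup id="fnref:1"><a href="#fn:1">1</a></sup></p>

<div class="footnotes">
  <ol>
    <li id="fn:1">
      <p>The footnote text, with a link back to the reference. <a href="#fnref:1">↩</a></p>
    </li>
  </ol>
</div>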
This is pretty niche, but if you do footnotes just exactly so you’ll get very nice hover behavior in NetNewsWire for footnotes, which happens to be an RSS reader that I like. They talk about paving the cowpaths in web standards. Meaning standardizing ideas when it’s obvious authors are doing it a bunch. I, for one, have certainly seen “spoilers” implemented quite a bit in different ways. Tracy Durnell wonders if we should just add it to HTML directly.
  17. Blogger posted a blog entry in Linux Tips
    by: Satoshi Nakamoto Sat, 18 Jan 2025 10:27:48 +0530

The pwd command in Linux, short for Print Working Directory, displays the absolute path of the current directory, helping users navigate the file system efficiently. It is one of the first commands you use when you start learning Linux. And if you are absolutely new, take advantage of this free crash course from Linux Handbook by Abhishek Prakash: Learn the Basic Linux Commands in an Hour [With Videos].

pwd command syntax

Like other Linux commands, pwd follows this syntax:

pwd [OPTIONS]

Here, [OPTIONS] are used to modify the default behavior of the pwd command. If you don't use any options, pwd will show the physical path of the current working directory by default. Unlike many other Linux commands, pwd does not come with many flags, and only two of them are important:

- -L: Displays the logical current working directory, including symbolic links.
- -P: Displays the physical current working directory, resolving symbolic links.
- --help: Displays help information about the pwd command.
- --version: Outputs version information of the pwd command.

Now, let's take a look at practical examples of the pwd command.

1. Display the current location

This is what the pwd command is famous for: giving you the name of the directory where you are located, or from where you are running the command.

pwd

2. Display the logical path including symbolic links

If you want to display logical paths and symbolic links, all you have to do is execute the pwd command with the -L flag as shown here:

pwd -L

To showcase its usage, I will go through multiple steps, so stay with me. First, go to the tmp directory using the cd command as shown here:

cd /tmp

Now, let's create a symbolic link pointing to the /var/log directory:

ln -s /var/log log_link

Finally, change your directory to log_link and use the pwd command with the -L flag:

pwd -L

In the above steps, I went to the /tmp directory, created a symbolic link that points to a specific location (/var/log), and then used the pwd command, which successfully showed me the symbolic link.

3. Display the physical path, resolving symbolic links

The pwd command is one of the ways to resolve symbolic links, meaning you'll see the destination directory the soft link points to. Use the -P flag:

pwd -P

I am going to use the symbolic link I created in the second example. Here's what I did:

- Navigate to /tmp.
- Create a symbolic link (log_link) pointing to /var/log.
- Change into the symbolic link (cd log_link).

Once you perform all the steps, you can check the real path behind the symbolic link:

pwd -P

4. Use the pwd command in shell scripts

To get the current location in a bash shell script, you can store the value of the pwd command in a variable and print it later, as shown here:

current_dir=$(pwd)
echo "You are in $current_dir"

Now, if you execute this shell script in your home directory like I did, you will get output similar to mine.

Bonus: Know the previous working directory

This is not exactly a use of the pwd command, but it is related and interesting. There is an environment variable in Linux called OLDPWD which stores the previous working directory path.
This means you can get the previous working directory by printing the value of this environment variable:

echo "$OLDPWD"

Conclusion

This was a quick tutorial on how you can use the pwd command in Linux, covering its syntax, options, and some practical examples. I hope you find them helpful. If you have any queries or suggestions, leave us a comment.
  18. by: Temani Afif Fri, 17 Jan 2025 14:57:39 +0000 You have for sure heard about the new CSS Anchor Positioning, right? It’s a feature that allows you to link any element from the page to another one, i.e., the anchor. It’s useful for all the tooltip stuff, but it can also create a lot of other nice effects. In this article, we will study menu navigation where I rely on anchor positioning to create a nice hover effect on links. CodePen Embed Fallback Cool, right? We have a sliding effect where the blue rectangle adjusts to fit perfectly with the text content over a nice transition. If you are new to anchor positioning, this example is perfect for you because it’s simple and allows you to discover the basics of this new feature. We will also study another example so stay until the end! Note that only Chromium-based browsers fully support anchor positioning at the time I’m writing this. You’ll want to view the demos in a browser like Chrome or Edge until the feature is more widely supported in other browsers. The initial configuration Let’s start with the HTML structure which is nothing but a nav element containing an unordered list of links: <nav> <ul> <li><a href="#">Home</a></li> <li class="active"><a href="#">About</a></li> <li><a href="#">Projects</a></li> <li><a href="#">Blog</a></li> <li><a href="#">Contact</a></li> </ul> </nav> We will not spend too much time explaining this structure because it can be different if your use case is different. Simply ensure the semantic is relevant to what you are trying to do. As for the CSS part, we will start with some basic styling to create a horizontal menu navigation. ul { padding: 0; margin: 0; list-style: none; display: flex; gap: .5rem; font-size: 2.2rem; } ul li a { color: #000; text-decoration: none; font-weight: 900; line-height: 1.5; padding-inline: .2em; display: block; } Nothing fancy so far. We remove some default styling and use Flexbox to align the elements horizontally. CodePen Embed Fallback Sliding effect First off, let’s understand how the effect works. At first glance, it looks like we have one rectangle that shrinks to a small height, moves to the hovered element, and then grows to full height. That’s the visual effect, but in reality, more than one element is involved! Here is the first demo where I am using different colors to better see what is happening. CodePen Embed Fallback Each menu item has its own “element” that shrinks or grows. Then we have a common “element” (the one in red) that slides between the different menu items. The first effect is done using a background animation and the second one is where anchor positioning comes into play! The background animation We will animate the height of a CSS gradient for this first part: /* 1 */ ul li { background: conic-gradient(lightblue 0 0) bottom/100% 0% no-repeat; transition: .2s; } /* 2 */ ul li:is(:hover,.active) { background-size: 100% 100%; transition: .2s .2s; } /* 3 */ ul:has(li:hover) li.active:not(:hover) { background-size: 100% 0%; transition: .2s; } We’ve defined a gradient with a 100% width and 0% height, placed at the bottom. The gradient syntax may look strange, but it’s the shortest one that allows me to have a single-color gradient. Related: “How to correctly define a one-color gradient” Then, if the menu item is hovered or has the .active class, we make the height equal to 100%. Note the use of the delay here to make sure the growing happens after the shrinking. Finally, we need to handle a special case with the .active item. 
If we hover any item (that is not the active one), then the .active item gets the shrinking effect (the gradient height is equal to 0%). That's the purpose of the third selector in the code. CodePen Embed Fallback

Our first animation is done! Notice how the growing begins after the shrinking completes because of the delay we defined in the second selector.

The anchor positioning animation

The first animation was quite easy because each item had its own background animation, meaning we didn't have to care about the text content since the background automatically fills the whole space. We will use one element for the second animation that slides between all the menu items while adapting its width to fit the text of each item. This is where anchor positioning can help us. Let's start with the following code:

ul:before {
  content: "";
  position: absolute;
  position-anchor: --li;
  background: red;
  transition: .2s;
}
ul li:is(:hover, .active) {
  anchor-name: --li;
}
ul:has(li:hover) li.active:not(:hover) {
  anchor-name: none;
}

To avoid adding an extra element, I will prefer using a pseudo-element on the ul. It should be absolutely-positioned and we will rely on two properties to activate the anchor positioning. We define the anchor with the anchor-name property. When a menu item is hovered or has the .active class, it becomes the anchor element. We also have to remove the anchor from the .active item if another item is in a hovered state (hence, the last selector in the code). In other words, only one anchor is defined at a time. Then we use the position-anchor property to link the pseudo-element to the anchor. Notice how both use the same notation --li. It's similar to how, for example, we define @keyframes with a specific name and later use it inside an animation property. Keep in mind that you have to use the <dashed-ident> syntax, meaning the name must always start with two dashes (--). CodePen Embed Fallback

The pseudo-element is correctly placed but nothing is visible because we didn't define any dimension! Let's add the following code:

ul:before {
  bottom: anchor(bottom);
  left: anchor(left);
  right: anchor(right);
  height: .2em;
}

The height property is trivial, but anchor() is a newcomer; Juan Diego describes it in the Almanac, and the MDN page covers it as well. Usually, we use left: 0 to place an absolute element at the left edge of its containing block (i.e., the nearest ancestor having position: relative). The left: anchor(left) will do the same, but instead of the containing block, it will consider the associated anchor element. That's all — we are done! Hover the menu items in the below demo and see how the pseudo-element slides between them. CodePen Embed Fallback

Each time you hover over a menu item, it becomes the new anchor for the pseudo-element (the ul:before). This also means that the anchor(...) values will change, creating the sliding effect! And let's not forget the transition, which is important; otherwise, we would have an abrupt change. We can also write the code differently, like this:

ul:before {
  content: "";
  position: absolute;
  inset: auto anchor(right, --li) anchor(bottom, --li) anchor(left, --li);
  height: .2em;
  background: red;
  transition: .2s;
}

In other words, we can rely on the inset shorthand instead of using physical properties like left, right, and bottom, and instead of defining position-anchor, we can include the anchor's name inside the anchor() function.
We are repeating the same name three times which is probably not optimal here but in some situations, you may want your element to consider multiple anchors, and in such cases, this syntax will make sense. Combining both effects Now, we combine both effects and, tada, the illusion is perfect! CodePen Embed Fallback Pay attention to the transition values where the delay is important: ul:before { transition: .2s .2s; } ul li { transition: .2s; } ul li:is(:hover,.active) { transition: .2s .4s; } ul:has(li:hover) li.active:not(:hover) { transition: .2s; } We have a sequence of three animations — shrink the height of the gradient, slide the pseudo-element, and grow the height of the gradient — so we need to have delays between them to pull everything together. That’s why for the sliding of the pseudo-element we have a delay equal to the duration of one animation (transition: .2 .2s) and for the growing part the delay is equal to twice the duration (transition: .2s .4s). Bouncy effect? Why not?! Let’s try another fancy animation in which the highlight rectangle morphs into a small circle, jumps to the next item, and transforms back into a rectangle again! CodePen Embed Fallback I won’t explain too much for this example as it’s your homework to dissect the code! I’ll offer a few hints so you can unpack what’s happening. Like the previous effect, we have a combination of two animations. For the first one, I will use the pseudo-element of each menu item where I will adjust the dimension and the border-radius to simulate the morphing. For the second animation, I will use the ul pseudo-element to create a small circle that I move between the menu items. Here is another version of the demo with different coloration and a slower transition to better visualize each animation: CodePen Embed Fallback The tricky part is the jumping effect where I am using a strange cubic-bezier() but I have a detailed article where I explain the technique in my CSS-Tricks article “Advanced CSS Animation Using cubic-bezier()”. Conclusion I hope you enjoyed this little experimentation using the anchor positioning feature. We only looked at three properties/values but it’s enough to prepare you for this new feature. The anchor-name and position-anchor properties are the mandatory pieces for linking one element (often called a “target” element in this context) to another element (what we call an “anchor” element in this context). From there, you have the anchor() function to control the position. Related: CSS Anchor Positioning Guide Fancy Menu Navigation Using Anchor Positioning originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  19. by: Sreenath

Waybar is the perfect top panel program for Wayland compositors like Hyprland, Sway, etc. It offers many built-in modules and also allows the user to create custom modules to fill the panel. We have already discussed how to configure Waybar in a previous tutorial. 📋 I recommend going through that article first; it should make things easier to understand as you read on. In this article, let's learn some eye-candy tricks to make your Hyprland user experience even better.

Hardware groups in Waybar, built with the group module.

Grouping modules in Waybar

Those who went through the wiki pages of Waybar may have seen a module called group. Unlike other modules (memory, cpu, etc.), this group module allows you to embed more pre-built modules inside it, as shown in the video above. So, what we are doing here is grouping related (or even unrelated, as you like) modules inside a group.

Writing a sample group module

Usually, all modules should be defined and then called in the top bar in their respective places as you require. This applies to the group as well. Let's make one:

Step 1: Start with the framework

First, define the group with a name and the structure:

"group/<groupname>": {
  ----,
  ----
}

The group module definition should be wrapped between the curly braces. For example, I am creating a group called hardware to place the CPU, RAM (memory), and Temperature modules.

🚧 The modules like cpu, memory, etc., that we need to add to a group should be defined separately outside the group definition. These definitions are explained in the Waybar article.

So, I will start the group definition at the end of my ~/.config/waybar/config.jsonc file:

"group/hardware": {
  ----,
  ----
}

🚧 In JSONC files, never forget to add a comma after the previous module (},) if it is not the last item.

Step 2: Add an orientation

You already know that Waybar allows you to place the bar on the top, bottom, left, or right of the screen. This means you can place your bar either vertically (left/right) or horizontally (top/bottom). Therefore, you may need to specify an orientation for the group items using the key orientation.

"group/hardware": {
  "orientation": "horizontal",
}

I am using a bar configured to appear at the top of the screen, so I chose the "horizontal" orientation. The value for orientation can be horizontal, vertical, orthogonal, or inherit.

Step 3: Add a drawer effect

With orientation set, let's make the group a bit neater by hiding all items except one. The interesting part is that when you hover over this unhidden item, the rest of the modules inside the group come out with a nice effect. It is like collapsing the items at once under one of the items, and then expanding. The keyword we use in the configuration here is "drawer".

"group/hardware": {
  "orientation": "horizontal",
  "drawer": {
    ---
  },
}

Inside the drawer, we can set the transition duration, motion, etc. Let's go minimal, setting only the transition duration and transition motion.

"group/hardware": {
  "orientation": "horizontal",
  "drawer": {
    "transition-duration": 500,
    "transition-left-to-right": false
  },
}

If we set the transition-left-to-right key to false, the first item in the list of modules (that we will add in the next section) stays put and the rest expand from it. Likewise, if left at the default (true), the first item and the rest will all draw out.

Step 4: Add the modules

It's time to add the modules that we want to appear inside the group.
"group/hardware": { "oreintation": "horizontal", "drawer": { "transition-duration": 500, "transition-left-to-right": false }, "modules": [ "custom/hardware-wrap", "cpu", "memory" "temperature" ] } Here, in the above snippet, we have created four modules to appear inside the group hardware. 📋 The first item inside the module key will be the one visible. The subsequent items will be hidden and will only appear when hovered over the first item. As said earlier, we need to define all the modules appearing inside the group as regular Waybar modules. Here, we will define a custom module called custom/hardware-wrap, just to hold a place for the Hardware section. So, outside the group module definition parenthesis, use the following code: "custom/hardware-wrap": { "format": Hardware "tooltip-format": "Hardware group" } So, when "custom/hardware-wrap" is placed inside the group module as the first item, only that will be visible, hiding the rest (cpu, memory, and temperature in this case.). Step 5: Simple CSS for the custom moduleLet's add a CSS for the custom module that we have added. Go inside the ~/.conf/waybar/style.css file and add the lines: #custom-hardware-wrap { box-shadow: none; background: #202131; text-shadow: none; padding: 0px; border-radius: 5px; margin-top: 3px; margin-bottom: 3px; margin-right: 6px; margin-left: 6px; padding-right: 4px; padding-left: 4px; color: #98C379; } Step 6: Add it to the panelNow that we have designed and styled the group, let's add it to the panel. We know that in Waybar, we have modules-left, modules-center, and modules-right to align elements in the panel. Let's place the new Hardware group to the right side of the panel. "modules-right": ["group/hardware", "pulseaudio", "tray"], In the above code inside the ~/.config/waybar/config.jsonc, you can see that, I have placed the group/hardware on the right along with PulseAudio and system tray. Hardware Group Collapsed Hardware Group Expanded Wrapping UpGrouping items is a handy trick, since it can create some space to place other items and make the top bar organized. If you are curious, you can take a look at the drawer snippet given in the group page of Waybar wiki. You can explore some more customizations like adding a power menu button.
