
✨ Welcome to CodeNameJessica! ✨

πŸ’» Where tech meets community.

Hello, Guest! πŸ‘‹
You're just a few clicks away from joining an exclusive space for tech enthusiasts, problem-solvers, and lifelong learners like you.

πŸ” Why Join?
By becoming a member of CodeNameJessica, you’ll get access to:
βœ… In-depth discussions on Linux, Security, Server Administration, Programming, and more
βœ… Exclusive resources, tools, and scripts for IT professionals
βœ… A supportive community of like-minded individuals to share ideas, solve problems, and learn together
βœ… Project showcases, guides, and tutorials from our members
βœ… Personalized profiles and direct messaging to collaborate with other techies

🌐 Sign Up Now and Unlock Full Access!
As a guest, you're seeing just a glimpse of what we offer. Don't miss out on the complete experience! Create a free account today and start exploring everything CodeNameJessica has to offer.

Blogs

Featured Entries

Our community blogs

  1. Linux News

    A blog by Blogger in CodeName Blogs
    • 29 Entries
    • 0 Comments
    • 979 Views
    By: Linux.com Editorial Staff
    Mon, 10 Mar 2025 15:30:39 +0000


    Join us for a Complimentary Live Webinar Sponsored by Linux Foundation Education and Arm Education

    March 19, 2025 | 08:00 AM PDT (UTC-7)

    Join us for an insightful webinar on leveraging CPUs for machine learning inference using the recently released, open source KleidiAI library. Discover how KleidiAI’s optimized micro-kernels are already being adopted by popular ML frameworks like PyTorch, enabling developers to achieve strong inference performance without GPU acceleration. We’ll discuss the key optimizations available in KleidiAI, review real-world use cases, and demonstrate how to get started in a fireside chat format, ensuring you stay ahead in the ML space and harness the full potential of the CPUs already in consumers’ hands. This Linux Foundation Education webinar is supported under the Semiconductor Education Alliance and sponsored by Arm.

    Register Now

    The post Learn how easy it is to leverage CPUs for machine learning with our free webinar appeared first on Linux.com.


  2. F.O.S.S

    A blog by Blogger in CodeName Blogs
    • 59 Entries
    • 0 Comments
    • 2155 Views
    by: Sreenath
    Mon, 07 Apr 2025 16:18:54 GMT


    Installing Logseq Knowledge Management Tool on Linux

    Logseq is a versatile open source tool for knowledge management. It is regarded as one of the best open source alternatives to the popular proprietary tool Obsidian.

    While it covers the basics of note-taking, it also doubles as a powerful task manager and journaling tool.

    (Screenshot: Logseq Desktop)

    What sets Logseq apart from traditional note-taking apps is its unique organization system, which forgoes hierarchical folder structures in favor of interconnected, block-based notes. This makes it an excellent choice for users seeking granular control and flexibility over their information.

    In this article, we’ll explore how to install Logseq on Linux distributions.

    Use the official AppImage

    For Linux systems, Logseq officially provides an AppImage. You can head over to the downloads page and grab the AppImage file.

    It is advisable to use a tool like AppImageLauncher (it hasn't seen a new release in a while, but the project is still active) or GearLever to create a desktop integration for Logseq.

    Fret not: if you would rather not use a third-party tool, you can set this up yourself as well.

    First, create a folder in your home directory to store all the AppImages. Next, move the Logseq AppImage to this location and give the file execution permission.
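    The steps above can be sketched in the terminal. The release filename below is an assumption; adjust it to match the file you actually downloaded:

    ```shell
    # Create a dedicated folder for AppImages (the name is a matter of preference)
    mkdir -p "$HOME/AppImages"

    # Move the downloaded file there; the exact filename differs per release
    mv "$HOME/Downloads/Logseq-linux-x64.AppImage" "$HOME/AppImages/"

    # Give the file execute permission (the CLI equivalent of the GUI steps below)
    chmod +x "$HOME/AppImages/Logseq-linux-x64.AppImage"
    ```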

    (Screenshot: Go to AppImage properties)

    Right-click on the AppImage file and open its properties. In the Permissions tab, enable "Allow Executing File as Program" or "Executable as Program" (the label varies by distribution, but the meaning is the same).

    Here's how it looks on a distribution with GNOME desktop:

    (Screenshot: Toggle execution permission)

    Once done, you can double-click the file to open the Logseq app.
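    If you also want Logseq to show up in your application launcher, a minimal desktop entry is enough. This is a hand-written sketch; the Exec path assumes the AppImages folder created above, so adjust both paths to your setup:

    ```ini
    [Desktop Entry]
    Type=Application
    Name=Logseq
    Comment=Knowledge management and journaling
    Exec=/home/YOUR_USER/AppImages/Logseq-linux-x64.AppImage
    Terminal=false
    Categories=Office;Utility;
    ```

    Save it as ~/.local/share/applications/logseq.desktop and it should appear in the launcher after the next refresh.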

    🚧
    If you are using Ubuntu 24.04 or above, you won't be able to open the Logseq AppImage due to a change in the AppArmor policy. You can either use other sources like Flatpak or take a look at a less secure alternative.

    Alternatively, use the 'semi-official' Flatpak

    Logseq has a Flatpak version available. This is not an official offering from the Logseq team, but is provided by a developer who also contributes to Logseq.

    First, make sure your system has Flatpak support. If not, enable Flatpak support and add Flathub repository by following our guide:

    Using Flatpak on Linux [Complete Guide]
    Learn all the essentials for managing Flatpak packages in this beginner’s guide.

    Now, install Logseq from a Flatpak-enabled software center like GNOME Software:

    (Screenshot: Install Logseq from GNOME Software)

    Or install it from the terminal with the following command (once installed, you can launch it with flatpak run com.logseq.Logseq):

    flatpak install flathub com.logseq.Logseq

    Other methods

    For Ubuntu users and those who have Snap set up, there is an unofficial Logseq client in the Snap store. You can go with that if you prefer.

    There are also Logseq desktop client packages available in the AUR. Arch Linux users can take a look at these packages and install them from the terminal with an AUR helper or the Pamac package manager.

    Post Installation

    Once you have installed Logseq, open it. This will bring you to the temporary journal page.

    To avoid potential data loss, you need to point Logseq at a local folder before you start working. For this, click on the "Add a graph" button at the top right, as shown in the screenshot below.

    (Screenshot: Click on "Add a graph")

    On the resulting page, click the "Choose a folder" button.

    (Screenshot: Click "Choose a folder")

    From the file chooser, either create a new directory or select an existing directory and click "Open".

    (Screenshot: Select a location)

    That's it. You can start using Logseq now. And I'll help you with that: I'll be sharing regular tutorials on using Logseq here over the next few weeks. Stay tuned.


  3. Jessica Brown

    • 46 Entries
    • 0 Comments
    • 3431 Views
    Overview

    On Thursday, February 6, 2025, multiple Cloudflare services, including R2 object storage, experienced a significant outage lasting 59 minutes. This incident resulted in complete operational failures against R2 and disruptions to dependent services such as Stream, Images, Cache Reserve, Vectorize, and Log Delivery. The root cause was traced to human error and inadequate validation safeguards during routine abuse remediation procedures.

    Impact Summary
    • Incident Duration: 08:14 UTC to 09:13 UTC (primary impact), with residual effects until 09:36 UTC.

    • Primary Issue: Disabling of the R2 Gateway service, responsible for the R2 API.

    • Data Integrity: No data loss or corruption occurred within R2.

    Affected Services
    1. R2: 100% failure of operations (uploads, downloads, metadata) during the outage. Minor residual errors (<1%) post-recovery.

    2. Stream: Complete service disruption during the outage.

    3. Images: Full impact on upload/download; delivery minimally affected (97% success rate).

    4. Cache Reserve: Increased origin requests, impacting <0.049% of cacheable requests.

    5. Log Delivery: Delays and data loss (up to 4.5% for non-R2, 13.6% for R2 jobs).

    6. Durable Objects: 0.09% error rate spike post-recovery.

    7. Cache Purge: 1.8% error rate increase, 10x latency during the incident.

    8. Vectorize: 75% query failures, 100% insert/upsert/delete failures during the outage.

    9. Key Transparency Auditor: Complete failure of publish/read operations.

    10. Workers & Pages: Minimal deployment failures (0.002%) for projects with R2 bindings.

    Incident Timeline
    • 08:12 UTC: R2 Gateway service inadvertently disabled.

    • 08:14 UTC: Service degradation begins.

    • 08:25 UTC: Internal incident declared.

    • 08:42 UTC: Root cause identified.

    • 08:57 UTC: Operations team begins re-enabling the R2 Gateway.

    • 09:10 UTC: R2 starts to recover.

    • 09:13 UTC: Primary impact ends.

    • 09:36 UTC: Residual error rates recover.

    • 10:29 UTC: Incident officially closed after monitoring.

    Root Cause Analysis

    The incident stemmed from human error during a phishing site abuse report remediation. Instead of targeting a specific endpoint, actions mistakenly disabled the entire R2 Gateway service. Contributing factors included:

    • Lack of system-level safeguards.

    • Inadequate account tagging and validation.

    • Limited operator training on critical service disablement risks.

    The Risks of CDN Dependencies in Critical Systems

    Content Delivery Networks (CDNs) play a vital role in improving website performance, scalability, and security. However, relying heavily on CDNs for critical systems can introduce significant risks when outages occur:

    • Lost Revenue: Downtime on e-commerce platforms or SaaS services can result in immediate lost sales and financial transactions, directly affecting revenue streams.

    • Lost Data: Although R2 did not suffer data loss in this incident, disruptions in data transmission processes can lead to lost or incomplete data, especially in logging and analytics services.

    • Lost Customers: Extended or repeated outages can erode customer trust and satisfaction, leading to churn and damage to brand reputation.

    • Operational Disruptions: Businesses relying on real-time data processing or automated workflows may face cascading failures when critical CDN services are unavailable.

    Remediation Steps

    Immediate Actions:

    • Deployment of additional guardrails in the Admin API.

    • Disabling high-risk manual actions in the abuse review UI.

    In-Progress Measures:

    • Improved internal account provisioning.

    • Restricting product disablement permissions.

    • Implementing two-party approval for critical actions.

    • Enhancing abuse checks to prevent internal service disruptions.

    Cloudflare acknowledges the severity of this incident and the disruption it caused to customers. We are committed to strengthening our systems, implementing robust safeguards, and ensuring that similar incidents are prevented in the future.



  4. Programmer's Corner

    A blog by Blogger in CodeName Blogs
    • 133 Entries
    • 0 Comments
    • 875 Views
    by: Geoff Graham
    Wed, 09 Apr 2025 13:00:24 +0000


    The CSS Overflow Module Level 5 specification defines a couple of new features that are designed for creating carousel UI patterns:

    • Scroll Buttons: Buttons that the browser provides, as in literal <button> elements, that scroll the carousel content 85% of the area when clicked.
    • Scroll Markers: The little dots that act as anchored links, as in literal <a> elements that scroll to a specific carousel item when clicked.

    Chrome has prototyped these features and released them in Chrome 135. Adam Argyle has a wonderful explainer over at the Chrome Developer blog. Kevin Powell has an equally wonderful video where he follows the explainer. This post is me taking notes from them.

    First, some markup:

    <ul class="carousel">
      <li>...</li>
      <li>...</li>
      <li>...</li>
      <li>...</li>
      <li>...</li>
    </ul>

    First, let’s set these up in a CSS auto grid that displays the list items in a single line:

    .carousel {
      display: grid;
      grid-auto-flow: column;
    }

    We can tailor this so that each list item takes up a specific amount of space, say 40%, and insert a gap between them:

    .carousel {
      display: grid;
      grid-auto-flow: column;
      grid-auto-columns: 40%;
      gap: 2rem;
    }

    This gives us a nice scrolling area to advance through the list items by moving left and right. We can use CSS Scroll Snapping to ensure that scrolling stops on each item in the center rather than scrolling right past them.

    .carousel {
      display: grid;
      grid-auto-flow: column;
      grid-auto-columns: 40%;
      gap: 2rem;
    
      scroll-snap-type: x mandatory;
    
      > li {
        scroll-snap-align: center;
      }
    }

    Kevin adds a little more flourish to the .carousel so that it is easier to see what’s going on. Specifically, he adds a border to the entire thing as well as padding for internal spacing.

    So far, what we have is a super simple slider of sorts where we can either scroll through items horizontally or click the left and right arrows in the scroller.

    We can add scroll buttons to the mix. We get two buttons, one to navigate one direction and one to navigate the other direction, which in this case is left and right, respectively. As you might expect, we get two new pseudo-elements for enabling and styling those buttons:

    • ::scroll-button(left)
    • ::scroll-button(right)

    Interestingly enough, if you crack open DevTools and inspect the scroll buttons, they are actually exposed with logical terms instead, ::scroll-button(inline-start) and ::scroll-button(inline-end).

    (Screenshot: DevTools with an arrow pointing at the two scroll buttons in the HTML, showing their logical naming.)

    And both of those support the CSS content property, which we use to insert a label into the buttons. Let’s keep things simple and stick with “Left” and “Right” as our labels for now:

    .carousel::scroll-button(left) {
      content: "Left";
    }
    .carousel::scroll-button(right) {
      content: "Right";
    }

    Now we have two buttons above the carousel. Clicking them either advances the carousel left or right by 85%. Why 85%? I don’t know. And neither does Kevin. That’s just what it says in the specification. I’m sure there’s a good reason for it and we’ll get more light shed on it at some point.

    But clicking the buttons in this specific example will advance the scroll only one list item at a time because we’ve set scroll snapping on it to stop at each item. So, even though the buttons want to advance by 85% of the scrolling area, we’re telling it to stop at each item.

    Remember, this is only supported in Chrome at the time of writing.
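    Given the limited support, it may be worth gating these styles behind a feature query so that other browsers simply get the plain scroller. This is my own sketch, assuming @supports selector() can detect the new pseudo-element; test it in your target browsers:

    ```css
    /* Only apply scroll-button styling where the pseudo-element parses */
    @supports selector(::scroll-button(left)) {
      .carousel::scroll-button(left) {
        content: "Left";
      }
      .carousel::scroll-button(right) {
        content: "Right";
      }
    }
    ```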

    We can select both buttons together in CSS, like this:

    .carousel::scroll-button(left),
    .carousel::scroll-button(right) {
      /* Styles */
    }

    Or we can use the Universal Selector:

    .carousel::scroll-button(*) {
      /* Styles */
    }

    And we can even use newer CSS Anchor Positioning to set the left button on the carousel’s left side and the right button on the carousel’s right side:

    .carousel {
      /* ... */
      anchor-name: --carousel; /* define the anchor */
    }
    
    .carousel::scroll-button(*) {
      position: fixed; /* set containment on the target */
      position-anchor: --carousel; /* set the anchor */
    }
    
    .carousel::scroll-button(left) {
      content: "Left";
      position-area: center left;
    }
    .carousel::scroll-button(right) {
      content: "Right";
      position-area: center right;
    }

    Notice what happens when navigating all the way to the left or right of the carousel. The buttons are disabled, indicating that you have reached the end of the scrolling area. Super neat! That’s something that is normally in JavaScript territory, but we’re getting it for free.

    Let’s work on the scroll markers, or those little dots that sit below the carousel’s content. Each one is an <a> element anchored to a specific list item in the carousel so that, when clicked, you get scrolled directly to that item.

    We get a new pseudo-element for the entire group of markers called ::scroll-marker-group that we can use to style and position the container. In this case, let’s set Flexbox on the group so that we can display them on a single line and place gaps between them in the center of the carousel’s inline size:

    .carousel::scroll-marker-group {
      display: flex;
      justify-content: center;
      gap: 1rem;
    }

    We also get a new scroll-marker-group property that lets us position the group either above (before) the carousel or below (after) it:

    .carousel {
      /* ... */
      scroll-marker-group: after; /* displayed below the content */
    }

    We can style the markers themselves with the new ::scroll-marker pseudo-element:

    .carousel {
      /* ... */
    
      > li::scroll-marker {
        content: "";
        aspect-ratio: 1;
        border: 2px solid CanvasText;
        border-radius: 100%;
        width: 20px;
      }
    }

    When clicking on a marker, it becomes the “active” item of the bunch, and we get to select and style it with the :target-current pseudo-class:

    li::scroll-marker:target-current {
      background: CanvasText;
    }

    Take a moment to click around the markers. Then take a moment with your keyboard and appreciate that we get all of the benefits of focus states, as well as the ability to cycle through the carousel items when reaching the end of the markers. It’s amazing what we’re getting for free in terms of user experience and accessibility.

    We can further style the markers when they are hovered or in focus:

    li::scroll-marker:hover,
    li::scroll-marker:focus-visible {
      background: LinkText;
    }

    And we can “animate” the scrolling effect by setting scroll-behavior: smooth on the scroll container. Adam smartly applies it only when the user’s motion preferences allow it:

    .carousel {
      /* ... */
    
      @media (prefers-reduced-motion: no-preference) {
        scroll-behavior: smooth;
      }
    }

    Buuuuut that seems to break scroll snapping a bit because the scroll buttons are attempting to slide things over by 85% of the scrolling space. Kevin had to fiddle with his grid-auto-columns sizing to get things just right, but showed how Adam’s example took a different sizing approach. It’s a matter of fussing with things to get them just right.


    This is just a super early look at CSS Carousels. Remember that this is only supported in Chrome 135+ at the time I’m writing this, and it’s purely experimental. So, play around with it, get familiar with the concepts, and then be open-minded to changes in the future as the CSS Overflow Level 5 specification is updated and other browsers begin building support.


    CSS Carousels originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.


  5. Linux Tips

    A blog by Blogger in CodeName Blogs
    • 52 Entries
    • 0 Comments
    • 637 Views
    by: Umair Khurshid
    Tue, 08 Apr 2025 12:11:49 +0530


    Port management in Docker and Docker Compose is essential to properly expose containerized services to the outside world, both in development and production environments.

    Understanding how port mapping works helps avoid conflicts, ensures security, and improves network configuration.

    This tutorial will walk you through configuring and mapping ports effectively in Docker and Docker Compose.

    What is port mapping in Docker?

    Port mapping exposes network services running inside a container to the host, to other containers on the same host, or to other hosts and network devices. It allows you to map a specific port from the host system to a port on the container, making the service accessible from outside the container.

    In the schematic below, two separate services run in two containers, and both use port 80. Their ports are mapped to host ports 8080 and 8090, making them accessible from outside through those two ports.

    (Diagram: Docker port mapping example)

    How to map ports in Docker

    Typically, a running container has its own isolated network namespace with its own IP address. By default, containers can communicate with each other and with the host system, but external network access is not automatically enabled.

    Port mapping is used to create communication between the container's isolated network and the host system's network.

    For example, let's map the Nginx container's port 80 to port 8080 on the host:

    docker run -d --publish 8080:80 nginx

    The --publish flag (usually shortened to -p) is what creates the association between the local port (8080) and the container port we are interested in (80).

    In this case, to access it, simply point a web browser at http://localhost:8080

    On the other hand, if the image you are using was built with the EXPOSE instruction, you can use the command in another way:

    docker run -d --publish-all hello-world

    Docker takes care of choosing a random port on your machine (instead of port 80 or any other specified port) to map to the ones declared in the Dockerfile.

    Mapping ports with Docker Compose

    Docker Compose allows you to define container configurations in a docker-compose.yml. To map ports, you use the ports YAML directive.

    version: '3.8'
    services:
      web:
        image: nginx:latest
        ports:
          - "8080:80"
    

    In this example, as in the previous case, the Nginx container will expose port 80 on the host's port 8080.

    Port mapping vs. exposing

    It is important not to confuse the ports directive with the expose directive. The former creates true port forwarding to the outside. The latter only documents that an internal port is being used by the container, but does not create any exposure to the host.

    services:
      app:
        image: myapp
        expose:
          - "3000"
    

    In this example, port 3000 will only be accessible from other containers in the same Docker network, but not from outside.
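    A minimal sketch of that container-to-container access: the second service and its image are hypothetical additions, but the pattern shows how another container on the same default network can reach app while the host cannot:

    ```yaml
    services:
      app:
        image: myapp          # hypothetical image from the example above
        expose:
          - "3000"
      client:
        image: alpine
        # 'app' resolves through Docker's internal DNS on the default
        # network, so from inside this container you could run:
        #   wget -qO- http://app:3000/
        command: ["sleep", "infinity"]
    ```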

    Mapping Multiple Ports

    You just saw how to map a single port, but Docker also allows you to map more than one port at a time. This is useful when your container needs to expose multiple services on different ports.

    Let's configure an Nginx server to handle both HTTP and HTTPS traffic:

    docker run -p 8080:80 -p 443:443 nginx
    

    Now the server listens for HTTP traffic on port 8080, mapped to port 80 inside the container, and HTTPS traffic on port 443, mapped to port 443 inside the container.

    Specifying host IP address for port binding

    By default, Docker binds container ports to all available IP addresses on the host machine. If you need to bind a port to a specific IP address on the host, you can specify that IP in the command. This is useful when you have multiple network interfaces or want to restrict access to specific IPs.

    docker run -p 192.168.1.100:8080:80 nginx
    

    This command binds port 8080 on the specific IP address 192.168.1.100 to port 80 inside the container.
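    The same host-IP syntax works in Docker Compose. A common variant is binding to the loopback address so the service is reachable only from the host itself, not from the rest of the network:

    ```yaml
    services:
      web:
        image: nginx:latest
        ports:
          # Loopback-only binding: reachable from this host, not the LAN
          - "127.0.0.1:8080:80"
    ```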

    Port range mapping

    Sometimes, you may need to map a range of ports instead of a single port. Docker allows this by specifying a range for both the host and container ports. For example,

    docker run -p 5000-5100:5000-5100 nginx
    

    This command maps a range of ports from 5000 to 5100 on the host to the same range inside the container. This is particularly useful when running services that need multiple ports, like a cluster of servers or applications with several endpoints.

    Using different ports for host and container

    In situations where you need to avoid conflicts, security concerns, or manage different environments, you may want to map different port numbers between the host machine and the container. This can be useful if the container uses a default port, but you want to expose it on a different port on the host to avoid conflicts.

    docker run -p 8081:80 nginx
    

    This command maps port 8081 on the host to port 80 inside the container. Here, the container is still running its web server on port 80, but it is exposed on port 8081 on the host machine.

    Binding to UDP ports (if you need that)

    By default, Docker maps TCP ports. However, you can also map UDP ports if your application uses UDP. This is common for protocols and applications that require low latency, real-time communication, or broadcast-based communication.

    For example, DNS uses UDP for query and response communication due to its speed and low overhead. If you are running a DNS server inside a Docker container, you would need to map UDP ports.

    docker run -p 53:53/udp ubuntu/bind9
    

    This command maps UDP port 53 on the host to UDP port 53 inside the container.

    Inspecting and verifying port mapping

    Once you have set up port mapping, you may want to verify that it’s working as expected. Docker provides several tools for inspecting and troubleshooting port mappings.

    To list all active containers and see their port mappings, use the docker ps command. The output includes a PORTS column that shows the mapping between the host and container ports.

    docker ps
    

    This might output something like:

    (Screenshot: docker ps output showing the PORTS column)

    If you need more detailed information about a container's port mappings, you can use docker inspect. This command gives you a JSON output with detailed information about the container's configuration.

    docker inspect <container_id> | grep "Host"
    

    This command filters the output down to the host-side port bindings.

    Wrapping Up

    Learn Docker: Complete Beginner’s Course
    Learn Docker, an important skill to have for any DevOps and modern sysadmin. Learn all the essentials of Docker in this series.

    When you are first learning Docker, one of the trickier topics is often networking and port mapping. I hope this rundown has helped clarify how port mapping works and how you can use it effectively to connect your containers to the outside world and orchestrate services across different environments.


  6. Opinion Blogs

    • 0 Entries
    • 0 Comments
    • 73 Views

    No blog entries yet
