
March 19, 2025 | 08:00 AM PDT (UTC-7)
You won't believe how fast this is! Join us for an insightful webinar on leveraging CPUs for machine learning inference using the recently released, open source KleidiAI library. Discover how KleidiAI's optimized micro-kernels are already being adopted by popular ML frameworks like PyTorch, enabling developers to achieve amazing inference performance without GPU acceleration. We'll discuss the key optimizations available in KleidiAI, review real-world use cases, and demonstrate how to get started with ease in a fireside chat format, ensuring you stay ahead in the ML space and harness the full potential of the CPUs already in consumer hands. This Linux Foundation Education webinar is supported under the Semiconductor Education Alliance and sponsored by Arm.
The post Learn how easy it is to leverage CPUs for machine learning with our free webinar appeared first on Linux.com.
Latest entry by Blogger,
Logseq is a versatile open source tool for knowledge management. It is regarded as one of the best open source alternatives to the popular proprietary tool Obsidian.
While it covers the basics of note-taking, it also doubles as a powerful task manager and journaling tool.
What sets Logseq apart from traditional note-taking apps is its unique organization system, which forgoes hierarchical folder structures in favor of interconnected, block-based notes. This makes it an excellent choice for users seeking granular control and flexibility over their information.
In this article, we’ll explore how to install Logseq on Linux distributions.
For Linux systems, Logseq officially provides an AppImage. You can head over to the downloads page and grab the AppImage file.
It is advised to use tools like AppImageLauncher (hasn't seen a new release for a while, but it is active) or GearLever to create a desktop integration for Logseq.
Fret not: if you would rather not use a third-party tool, you can do it yourself as well.
First, create a folder in your home directory to store all the AppImages. Next, move the Logseq AppImage to this location and give the file execution permission.
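Those two steps can be sketched in the terminal as well; this is a minimal sketch assuming the AppImage was saved to ~/Downloads as Logseq.AppImage (adjust the filename to match your actual download):

```shell
# Create a dedicated folder for AppImages in your home directory
mkdir -p "$HOME/AppImages"

# Move the downloaded AppImage there (the filename is an assumption; adjust it)
mv "$HOME/Downloads/Logseq.AppImage" "$HOME/AppImages/" 2>/dev/null || true

# Give the file execution permission
chmod +x "$HOME/AppImages/Logseq.AppImage" 2>/dev/null || true
```

If you prefer a graphical route, the next step shows the same permission change from the file manager.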
Right-click on the AppImage file and open the file properties. In the Permissions tab, select "Allow Executing as a Program" (the label may read "Executable as Program" on some distros; the meaning is the same).
Here's how it looks on a distribution with GNOME desktop:
Once done, you can double-click the AppImage to open the Logseq app.
Logseq has a Flatpak version available. This is not an official offering from the Logseq team, but is provided by a developer who also contributes to Logseq.
First, make sure your system has Flatpak support. If not, enable Flatpak support and add Flathub repository by following our guide:
Now, install Logseq either from a Flatpak-supported software center like GNOME Software:
Or install it using the terminal with the following command:
flatpak install flathub com.logseq.Logseq
For Ubuntu users and those who have Snap set up, there is an unofficial Logseq client in the Snap store. You can go with that if you prefer.
There are also packages available in the AUR for Logseq desktop clients. Arch Linux users can take a look at these packages and install them via the terminal with an AUR helper or the Pamac package manager.
Once you have installed Logseq, open it. This will bring you to the temporary journal page.
You need to open a local folder before starting your work in Logseq to avoid potential data loss. For this, click on the "Add a graph" button on the top-right, as shown in the screenshot below.
On the resulting page, click on "Choose a folder" button.
From the file chooser, either create a new directory or select an existing directory and click "Open".
That's it. You can start using Logseq now. And I'll help you with that. I'll be sharing regular tutorials on using Logseq for the next few days/weeks here. Stay tuned.
Latest entry by Jessica Brown,
On Thursday, February 6, 2025, multiple Cloudflare services, including R2 object storage, experienced a significant outage lasting 59 minutes. This incident resulted in complete operational failures against R2 and disruptions to dependent services such as Stream, Images, Cache Reserve, Vectorize, and Log Delivery. The root cause was traced to human error and inadequate validation safeguards during routine abuse remediation procedures.
Incident Duration: 08:14 UTC to 09:13 UTC (primary impact), with residual effects until 09:36 UTC.
Primary Issue: Disabling of the R2 Gateway service, responsible for the R2 API.
Data Integrity: No data loss or corruption occurred within R2.
R2: 100% failure of operations (uploads, downloads, metadata) during the outage. Minor residual errors (<1%) post-recovery.
Stream: Complete service disruption during the outage.
Images: Full impact on upload/download; delivery minimally affected (97% success rate).
Cache Reserve: Increased origin requests, impacting <0.049% of cacheable requests.
Log Delivery: Delays and data loss (up to 4.5% for non-R2, 13.6% for R2 jobs).
Durable Objects: 0.09% error rate spike post-recovery.
Cache Purge: 1.8% error rate increase, 10x latency during the incident.
Vectorize: 75% query failures, 100% insert/upsert/delete failures during the outage.
Key Transparency Auditor: Complete failure of publish/read operations.
Workers & Pages: Minimal deployment failures (0.002%) for projects with R2 bindings.
08:12 UTC: R2 Gateway service inadvertently disabled.
08:14 UTC: Service degradation begins.
08:25 UTC: Internal incident declared.
08:42 UTC: Root cause identified.
08:57 UTC: Operations team begins re-enabling the R2 Gateway.
09:10 UTC: R2 starts to recover.
09:13 UTC: Primary impact ends.
09:36 UTC: Residual error rates recover.
10:29 UTC: Incident officially closed after monitoring.
The incident stemmed from human error during a phishing site abuse report remediation. Instead of targeting a specific endpoint, actions mistakenly disabled the entire R2 Gateway service. Contributing factors included:
Lack of system-level safeguards.
Inadequate account tagging and validation.
Limited operator training on critical service disablement risks.
Content Delivery Networks (CDNs) play a vital role in improving website performance, scalability, and security. However, relying heavily on CDNs for critical systems can introduce significant risks when outages occur:
Lost Revenue: Downtime on e-commerce platforms or SaaS services can result in immediate lost sales and financial transactions, directly affecting revenue streams.
Lost Data: Although R2 did not suffer data loss in this incident, disruptions in data transmission processes can lead to lost or incomplete data, especially in logging and analytics services.
Lost Customers: Extended or repeated outages can erode customer trust and satisfaction, leading to churn and damage to brand reputation.
Operational Disruptions: Businesses relying on real-time data processing or automated workflows may face cascading failures when critical CDN services are unavailable.
Immediate Actions:
Deployment of additional guardrails in the Admin API.
Disabling high-risk manual actions in the abuse review UI.
In-Progress Measures:
Improved internal account provisioning.
Restricting product disablement permissions.
Implementing two-party approval for critical actions.
Enhancing abuse checks to prevent internal service disruptions.
Cloudflare acknowledges the severity of this incident and the disruption it caused to customers. We are committed to strengthening our systems, implementing robust safeguards, and ensuring that similar incidents are prevented in the future.
For more information about Cloudflare's services or to explore career opportunities, visit our website.
Latest entry by Blogger,
The CSS Overflow Module Level 5 specification defines a couple of new features that are designed for creating carousel UI patterns:
Scroll buttons: <button> elements that scroll the carousel content by 85% of the scrolling area when clicked.
Scroll markers: <a> elements that scroll to a specific carousel item when clicked.
Chrome has prototyped these features and released them in Chrome 135. Adam Argyle has a wonderful explainer over at the Chrome Developer blog. Kevin Powell has an equally wonderful video where he follows the explainer. This post is me taking notes from them.
First, some markup:
<ul class="carousel">
<li>...</li>
<li>...</li>
<li>...</li>
<li>...</li>
<li>...</li>
</ul>
First, let’s set these up in a CSS auto grid that displays the list items in a single line:
.carousel {
display: grid;
grid-auto-flow: column;
}
We can tailor this so that each list item takes up a specific amount of space, say 40%, and insert a gap between them:
.carousel {
display: grid;
grid-auto-flow: column;
grid-auto-columns: 40%;
gap: 2rem;
}
This gives us a nice scrolling area to advance through the list items by moving left and right. We can use CSS Scroll Snapping to ensure that scrolling stops on each item in the center rather than scrolling right past them.
.carousel {
display: grid;
grid-auto-flow: column;
grid-auto-columns: 40%;
gap: 2rem;
scroll-snap-type: x mandatory;
> li {
scroll-snap-align: center;
}
}
Kevin adds a little more flourish to the .carousel so that it is easier to see what's going on. Specifically, he adds a border to the entire thing as well as padding for internal spacing.
So far, what we have is a super simple slider of sorts where we can either scroll through items horizontally or click the left and right arrows in the scroller.
We can add scroll buttons to the mix. We get two buttons, one to navigate one direction and one to navigate the other direction, which in this case is left and right, respectively. As you might expect, we get two new pseudo-elements for enabling and styling those buttons:
::scroll-button(left)
::scroll-button(right)
Interestingly enough, if you crack open DevTools and inspect the scroll buttons, they are actually exposed with logical terms instead: ::scroll-button(inline-start) and ::scroll-button(inline-end).
And both of those support the CSS content property, which we use to insert a label into the buttons. Let's keep things simple and stick with "Left" and "Right" as our labels for now:
.carousel::scroll-button(left) {
content: "Left";
}
.carousel::scroll-button(right) {
content: "Right";
}
Now we have two buttons above the carousel. Clicking them either advances the carousel left or right by 85%. Why 85%? I don’t know. And neither does Kevin. That’s just what it says in the specification. I’m sure there’s a good reason for it and we’ll get more light shed on it at some point.
But clicking the buttons in this specific example will advance the scroll only one list item at a time because we’ve set scroll snapping on it to stop at each item. So, even though the buttons want to advance by 85% of the scrolling area, we’re telling it to stop at each item.
Remember, this is only supported in Chrome at the time of writing.
We can select both buttons together in CSS, like this:
.carousel::scroll-button(left),
.carousel::scroll-button(right) {
/* Styles */
}
Or we can use the Universal Selector:
.carousel::scroll-button(*) {
/* Styles */
}
And we can even use newer CSS Anchor Positioning to set the left button on the carousel’s left side and the right button on the carousel’s right side:
.carousel {
/* ... */
anchor-name: --carousel; /* define the anchor */
}
.carousel::scroll-button(*) {
position: fixed; /* set containment on the target */
position-anchor: --carousel; /* set the anchor */
}
.carousel::scroll-button(left) {
content: "Left";
position-area: center left;
}
.carousel::scroll-button(right) {
content: "Right";
position-area: center right;
}
Notice what happens when navigating all the way to the left or right of the carousel. The buttons are disabled, indicating that you have reached the end of the scrolling area. Super neat! That’s something that is normally in JavaScript territory, but we’re getting it for free.
Let's work on the scroll markers, or those little dots that sit below the carousel's content. Each one is an <a> element anchored to a specific list item in the carousel so that, when clicked, you get scrolled directly to that item.
We get a new pseudo-element for the entire group of markers called ::scroll-marker-group that we can use to style and position the container. In this case, let's set Flexbox on the group so that we can display the markers on a single line and place gaps between them, centered along the carousel's inline size:
.carousel::scroll-marker-group {
display: flex;
justify-content: center;
gap: 1rem;
}
We also get a new scroll-marker-group property that lets us position the group either above (before) the carousel or below (after) it:
.carousel {
/* ... */
scroll-marker-group: after; /* displayed below the content */
}
We can style the markers themselves with the new ::scroll-marker pseudo-element:
.carousel {
/* ... */
> li::scroll-marker {
content: "";
aspect-ratio: 1;
border: 2px solid CanvasText;
border-radius: 100%;
width: 20px;
}
}
When clicking on a marker, it becomes the "active" item of the bunch, and we get to select and style it with the :target-current pseudo-class:
li::scroll-marker:target-current {
background: CanvasText;
}
Take a moment to click around the markers. Then take a moment using your keyboard and appreciate that we get all of the benefits of focus states as well as the ability to cycle through the carousel items when reaching the end of the markers. It's amazing what we're getting for free in terms of user experience and accessibility.
We can further style the markers when they are hovered or in focus:
li::scroll-marker:hover,
li::scroll-marker:focus-visible {
background: LinkText;
}
And we can "animate" the scrolling effect by setting scroll-behavior: smooth on the scroll container. Adam smartly applies it only when the user's motion preferences allow it:
.carousel {
/* ... */
@media (prefers-reduced-motion: no-preference) {
scroll-behavior: smooth;
}
}
Buuuuut that seems to break scroll snapping a bit because the scroll buttons are attempting to slide things over by 85% of the scrolling space. Kevin had to fiddle with his grid-auto-columns sizing to get things just right, but showed how Adam's example took a different sizing approach. It's a matter of fussing with things to get them just right.
This is just a super early look at CSS Carousels. Remember that this is only supported in Chrome 135+ at the time I’m writing this, and it’s purely experimental. So, play around with it, get familiar with the concepts, and then be open-minded to changes in the future as the CSS Overflow Level 5 specification is updated and other browsers begin building support.
CSS Carousels originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
Latest entry by Blogger,
Port management in Docker and Docker Compose is essential to properly expose containerized services to the outside world, both in development and production environments.
Understanding how port mapping works helps avoid conflicts, ensures security, and improves network configuration.
This tutorial will help you understand how to configure and map ports effectively in Docker and Docker Compose.
Port mapping exposes network services running inside a container to the host, to other containers on the same host, or to other hosts and network devices. It allows you to map a specific port from the host system to a port on the container, making the service accessible from outside the container.
In the schematic below, there are two separate services running in two containers, and both use port 80. Their ports are mapped to host ports 8080 and 8090, and thus they are accessible from outside through these two ports.
Typically, a running container has its own isolated network namespace with its own IP address. By default, containers can communicate with each other and with the host system, but external network access is not automatically enabled.
Port mapping is used to create communication between the container's isolated network and the host system's network.
For example, let's map Nginx to port 80:
docker run -d --publish 8080:80 nginx
The --publish flag (usually shortened to -p) is what creates that association between the local port (8080) and the port of interest to us in the container (80).
In this case, to access it, you simply open a web browser and go to http://localhost:8080.
On the other hand, if the image you are using to create the container has made good use of the EXPOSE instruction, you can use the command in this other way:
docker run -d --publish-all hello-world
Docker takes care of choosing a random port on your machine (instead of port 80 or other specified ports) to map to those specified in the Dockerfile. You can check which port was assigned with the docker ps or docker port command.
Docker Compose allows you to define container configurations in a docker-compose.yml file. To map ports, you use the ports YAML directive.
version: '3.8'
services:
web:
image: nginx:latest
ports:
- "8080:80"
In this example, as in the previous case, the Nginx container will expose port 80 on the host's port 8080.
It is important not to confuse the ports directive with the expose directive. The former creates true port forwarding to the outside. The latter only documents that an internal port is being used by the container, and does not create any exposure to the host.
services:
app:
image: myapp
expose:
- "3000"
In this example, port 3000 will only be accessible from other containers in the same Docker network, but not from outside.
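To see the two directives side by side, here is a minimal compose sketch (the myapp image name is a placeholder): the web service is published to the host, while app stays reachable only from other containers on the same network.

```yaml
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"    # forwarded: reachable from the host at localhost:8080
  app:
    image: myapp     # placeholder image
    expose:
      - "3000"       # documented/internal only: reachable just from other containers
```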
You just saw how to map a single port, but Docker also allows you to map more than one port at a time. This is useful when your container needs to expose multiple services on different ports.
Let's configure a nginx server to work in a dual stack environment:
docker run -p 8080:80 -p 443:443 nginx
Now the server listens for both HTTP traffic on port 8080 (mapped to port 80 inside the container) and HTTPS traffic on port 443 (mapped to port 443 inside the container).
By default, Docker binds container ports to all available IP addresses on the host machine. If you need to bind a port to a specific IP address on the host, you can specify that IP in the command. This is useful when you have multiple network interfaces or want to restrict access to specific IPs.
docker run -p 192.168.1.100:8080:80 nginx
This command binds port 8080 on the specific IP address 192.168.1.100 to port 80 inside the container.
Sometimes, you may need to map a range of ports instead of a single port. Docker allows this by specifying a range for both the host and container ports. For example:
docker run -p 5000-5100:5000-5100 nginx
This command maps a range of ports from 5000 to 5100 on the host to the same range inside the container. This is particularly useful when running services that need multiple ports, like a cluster of servers or applications with several endpoints.
In situations where you need to avoid conflicts, security concerns, or manage different environments, you may want to map different port numbers between the host machine and the container. This can be useful if the container uses a default port, but you want to expose it on a different port on the host to avoid conflicts.
docker run -p 8081:80 nginx
This command maps port 8081 on the host to port 80 inside the container. Here, the container is still running its web server on port 80, but it is exposed on port 8081 on the host machine.
By default, Docker maps TCP ports. However, you can also map UDP ports if your application uses UDP. This is common for protocols and applications that require low latency, real-time communication, or broadcast-based communication.
For example, DNS uses UDP for query and response communication due to its speed and low overhead. If you are running a DNS server inside a Docker container, you would need to map UDP ports.
docker run -p 53:53/udp ubuntu/bind9
Here this command maps UDP port 53 on the host to UDP port 53 inside the container.
Once you have set up port mapping, you may want to verify that it's working as expected. Docker provides several tools for inspecting and troubleshooting port mappings.
To list all active containers and see their port mappings, use the docker ps command. The output includes a PORTS column that shows the mapping between the host and container ports.
docker ps
This might output something like:
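For illustration, if the Nginx container mapped earlier is running, the listing might resemble the following (the container ID and name here are made up and will differ on your machine):

```
CONTAINER ID   IMAGE     COMMAND                  CREATED         STATUS         PORTS                  NAMES
3f2a1b9c8d7e   nginx     "/docker-entrypoint.…"   5 minutes ago   Up 5 minutes   0.0.0.0:8080->80/tcp   web
```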
If you need more detailed information about a container's port mappings, you can use docker inspect. This command gives you a JSON output with detailed information about the container's configuration.
docker inspect <container_id> | grep "Host"
This command will display the port mappings, such as:
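As a hypothetical fragment of that filtered output for the Nginx example (the values are illustrative):

```
"HostIp": "0.0.0.0",
"HostPort": "8080"
```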
When you are first learning Docker, one of the more tricky topics is often networking and port mapping. I hope this rundown has helped clarify how port mapping works and how you can effectively use it to connect your containers to the outside world and orchestrate services across different environments.