
Where tech meets community.
Hello, Guest! 👋
You're just a few clicks away from joining an exclusive space for tech enthusiasts, problem-solvers, and lifelong learners like you.
Why Join?
By becoming a member of CodeNameJessica, you'll get access to:
✅ In-depth discussions on Linux, Security, Server Administration, Programming, and more
✅ Exclusive resources, tools, and scripts for IT professionals
✅ A supportive community of like-minded individuals to share ideas, solve problems, and learn together
✅ Project showcases, guides, and tutorials from our members
✅ Personalized profiles and direct messaging to collaborate with other techies
Sign Up Now and Unlock Full Access!
As a guest, you're seeing just a glimpse of what we offer. Don't miss out on the complete experience! Create a free account today and start exploring everything CodeNameJessica has to offer.
March 19, 2025 | 08:00 AM PDT (UTC-7)
You won't believe how fast this is! Join us for an insightful webinar on leveraging CPUs for machine learning inference using the recently released, open source KleidiAI library. Discover how KleidiAI's optimized micro-kernels are already being adopted by popular ML frameworks like PyTorch, enabling developers to achieve amazing inference performance without GPU acceleration. We'll discuss the key optimizations available in KleidiAI, review real-world use cases, and demonstrate how to get started with ease in a fireside chat format, ensuring you stay ahead in the ML space and harness the full potential of CPUs already in consumer hands. This Linux Foundation Education webinar is supported under the Semiconductor Education Alliance and sponsored by Arm.
The post Learn how easy it is to leverage CPUs for machine learning with our free webinar appeared first on Linux.com.
Linux YouTuber Brodie Robertson liked It's FOSS' April Fool joke so much that he made a detailed video on it. It's quite fun to watch, actually 😄
💬 Let's see what else you get in this edition
❇️ Future-Proof Your Cloud Storage With Post-Quantum Encryption
Get 82% off any Internxt lifetime plan—a one-time payment for private, post-quantum encrypted cloud storage.
No subscriptions, no recurring fees, 30-day money back policy.
The APT 3.0 release has finally arrived with a better user experience.
Mozilla has begun the initial implementation of AI features into Firefox.
Desktop Linux is mostly neglected by the industry but loved by the community. For the past 12 years, It's FOSS has been helping people use Linux on their personal computers. And we are now facing an existential threat from AI models stealing our content.
If you like what we do and would love to support our work, please become an It's FOSS Plus member. It costs $24 a year (less than the cost of a burger meal each month) and you get an ad-free reading experience with the satisfaction of helping the desktop Linux community.
This time, we have a DIY biosignal tool that can be used for neuroscience research and education purposes.
Clapgrep is a powerful open source search tool for Linux.
See the new features in APT 3.0 in action in our latest video.
Take a trip down memory lane with our 80s Nostalgic Gadgets puzzle.
How sharp is your Git knowledge? Our latest crossword will put it to the test.
In Firefox, you can delete temporary browsing data using the "Forget" button. First, right-click on the toolbar and select "Customize Toolbar".
Now, from the list, drag and drop the "Forget" button onto the toolbar. When you click it, you will be asked whether to clear the last 5 minutes, 2 hours, or 24 hours of browsing data; pick one of them and click "Forget!".
The glow up is real with this one. 🤭
On April 7, 1964, IBM introduced the System/360, the first family of computers designed to be fully compatible with each other, unlike earlier systems where each model had its own unique software and hardware.
One of our regular FOSSers played around with ARM64 on Linux and liked it.
Share it with your Linux-using friends and encourage them to subscribe (hint: it's here).
Share the articles in Linux Subreddits and community forums.
Follow us on Google News and stay updated in your News feed.
Opt for It's FOSS Plus membership and support us 🙏
Enjoy FOSS 😄
Latest entry by Jessica Brown,
On Thursday, February 6, 2025, multiple Cloudflare services, including R2 object storage, experienced a significant outage lasting 59 minutes. This incident resulted in complete operational failures against R2 and disruptions to dependent services such as Stream, Images, Cache Reserve, Vectorize, and Log Delivery. The root cause was traced to human error and inadequate validation safeguards during routine abuse remediation procedures.
Incident Duration: 08:14 UTC to 09:13 UTC (primary impact), with residual effects until 09:36 UTC.
Primary Issue: Disabling of the R2 Gateway service, responsible for the R2 API.
Data Integrity: No data loss or corruption occurred within R2.
R2: 100% failure of operations (uploads, downloads, metadata) during the outage. Minor residual errors (<1%) post-recovery.
Stream: Complete service disruption during the outage.
Images: Full impact on upload/download; delivery minimally affected (97% success rate).
Cache Reserve: Increased origin requests, impacting <0.049% of cacheable requests.
Log Delivery: Delays and data loss (up to 4.5% for non-R2, 13.6% for R2 jobs).
Durable Objects: 0.09% error rate spike post-recovery.
Cache Purge: 1.8% error rate increase, 10x latency during the incident.
Vectorize: 75% query failures, 100% insert/upsert/delete failures during the outage.
Key Transparency Auditor: Complete failure of publish/read operations.
Workers & Pages: Minimal deployment failures (0.002%) for projects with R2 bindings.
08:12 UTC: R2 Gateway service inadvertently disabled.
08:14 UTC: Service degradation begins.
08:25 UTC: Internal incident declared.
08:42 UTC: Root cause identified.
08:57 UTC: Operations team begins re-enabling the R2 Gateway.
09:10 UTC: R2 starts to recover.
09:13 UTC: Primary impact ends.
09:36 UTC: Residual error rates recover.
10:29 UTC: Incident officially closed after monitoring.
The incident stemmed from human error during a phishing site abuse report remediation. Instead of targeting a specific endpoint, actions mistakenly disabled the entire R2 Gateway service. Contributing factors included:
Lack of system-level safeguards.
Inadequate account tagging and validation.
Limited operator training on critical service disablement risks.
Content Delivery Networks (CDNs) play a vital role in improving website performance, scalability, and security. However, relying heavily on CDNs for critical systems can introduce significant risks when outages occur:
Lost Revenue: Downtime on e-commerce platforms or SaaS services can result in immediate lost sales and financial transactions, directly affecting revenue streams.
Lost Data: Although R2 did not suffer data loss in this incident, disruptions in data transmission processes can lead to lost or incomplete data, especially in logging and analytics services.
Lost Customers: Extended or repeated outages can erode customer trust and satisfaction, leading to churn and damage to brand reputation.
Operational Disruptions: Businesses relying on real-time data processing or automated workflows may face cascading failures when critical CDN services are unavailable.
Immediate Actions:
Deployment of additional guardrails in the Admin API.
Disabling high-risk manual actions in the abuse review UI.
In-Progress Measures:
Improved internal account provisioning.
Restricting product disablement permissions.
Implementing two-party approval for critical actions.
Enhancing abuse checks to prevent internal service disruptions.
Cloudflare acknowledges the severity of this incident and the disruption it caused to customers. We are committed to strengthening our systems, implementing robust safeguards, and ensuring that similar incidents are prevented in the future.
Latest entry by Blogger,
By this point, it’s not a secret to most people that I like Tailwind.
But, unknown to many people (who often jump to conclusions when you mention Tailwind), I don’t like vanilla Tailwind. In fact, I find most of it horrible and I shall refrain from saying further unkind words about it.
But I recognize and see that Tailwind's methodology has merits (lots of them, in fact) and they go a long way to making your styles more maintainable and performant.
Today, I want to explore one of these merit-producing features that has been severely undersold: Tailwind's `@apply` feature.
What `@apply` does
Tailwind's `@apply` feature lets you "apply" (or simply put, copy-and-paste) a Tailwind utility into your CSS.
Most of the time, people showcase Tailwind's `@apply` feature with one of Tailwind's single-property utilities (which changes a single CSS declaration). When showcased this way, `@apply` doesn't sound promising at all. It sounds downright stupid. So obviously, nobody wants to use it.
/* Input */
.selector {
@apply p-4;
}
/* Output */
.selector {
padding: 1rem;
}
To make it worse, Adam Wathan recommends against using `@apply`, so the uptake couldn't be worse.
Confession: The `apply` feature in Tailwind basically only exists to trick people who are put off by long lists of classes into trying the framework.
— Adam Wathan (@adamwathan) February 9, 2020
You should almost never use it 😬
Reuse your utility-littered HTML instead. https://t.co/x6y4ksDwrt
Personally, I think Tailwind's `@apply` feature is better than described.
`@apply` is like Sass's `@include`
If you were around when Sass was the dominant CSS processing tool, you've probably heard of Sass mixins. They are blocks of code that you define in advance (with `@mixin`) and copy-paste into the rest of your code (with `@include`).
// Defining the mixin
@mixin some-mixin() {
color: red;
background: blue;
}
// Using the mixin
.selector {
@include some-mixin();
}
/* Output */
.selector {
color: red;
background: blue;
}
Tailwind's `@apply` feature works the same way. You can define Tailwind utilities in advance and use them later in your code.
/* Defining the utility */
@utility some-utility {
color: red;
background: blue;
}
/* Applying the utility */
.selector {
@apply some-utility;
}
/* Output */
.selector {
color: red;
background: blue;
}
Tailwind's utilities can be used directly in the HTML, so you don't have to write a CSS rule for it to work.
@utility some-utility {
color: red;
background: blue;
}
<div class="some-utility">...</div>
On the contrary, for Sass mixins, you need to create an extra selector to house your `@include` before using it in the HTML. That's one extra step. Many of these extra steps add up to a lot.
@mixin some-mixin() {
color: red;
background: blue;
}
.selector {
@include some-mixin();
}
/* Output */
.selector {
color: red;
background: blue;
}
<div class="selector">...</div>
Tailwind's utilities can also be used with their responsive variants. This unlocks media queries straight in the HTML and can be a superpower for creating responsive layouts.
<div class="utility1 md:utility2">…</div>
One of my favorite (and most easily understood) examples of all time is a combination of two utilities that I've built for Splendid Layouts (a part of Splendid Labz):
- `vertical`: makes a vertical layout
- `horizontal`: makes a horizontal layout
Defining these two utilities is easy:
- For `vertical`, we can use flexbox with `flex-direction` set to `column`.
- For `horizontal`, we use flexbox with `flex-direction` set to `row`.
.@utility horizontal {
display: flex;
flex-direction: row;
gap: 1rem;
}
@utility vertical {
display: flex;
flex-direction: column;
gap: 1rem;
}
After defining these utilities, we can use them directly inside the HTML. So, if we want to create a vertical layout on mobile and a horizontal one on tablet or desktop, we can use the following classes:
<div class="vertical sm:horizontal">...</div>
For those who are new to Tailwind, `sm:` here is a breakpoint variant that tells Tailwind to activate a class when the viewport goes beyond a certain breakpoint. By default, `sm` is set to `640px`, so the above HTML produces a vertical layout on mobile, then switches to a horizontal layout at `640px`.
If you prefer traditional CSS over composing classes like the example above, you can treat `@apply` like Sass's `@include` and use it directly in your CSS.
<div class="your-layout">...</div>
.your-layout {
@apply vertical;
@media (width >= 640px) {
@apply horizontal;
}
}
The beautiful part about both of these approaches is you can immediately see what's happening with your layout, in plain English, without parsing code through a CSS lens. This means faster recognition and more maintainable code in the long run.
Sass mixins are more powerful than Tailwind utilities because they can take in multiple variables and use Sass logic like `@if` conditionals and `@for` loops.
@mixin avatar($size, $circle: false) {
width: $size;
height: $size;
@if $circle {
border-radius: math.div($size, 2);
}
}
On the other hand, Tailwind utilities don’t have these powers. At the very maximum, Tailwind can let you take in one variable through their functional utilities.
/* Tailwind Functional Utility */
@utility tab-* {
tab-size: --value(--tab-size-*);
}
Fortunately, we're not affected by this "lack of power" much because we can take advantage of all modern CSS improvements, including CSS variables. This gives you a ton of room to create very useful utilities.
A second example I often like to showcase is the `grid-simple` utility that lets you create grids with CSS Grid easily.
We can declare a simple example here:
@utility grid-simple {
display: grid;
grid-template-columns: repeat(var(--cols), minmax(0, 1fr));
gap: var(--gap, 1rem);
}
By doing this, we have effectively created a reusable CSS grid (and we no longer have to manually declare `minmax()` everywhere).
After we have defined this utility, we can use Tailwind’s arbitrary properties to adjust the number of columns on the fly.
<div class="grid-simple [--cols:3]">
<div class="item">...</div>
<div class="item">...</div>
<div class="item">...</div>
</div>
To make the grid responsive, we can add Tailwind's responsive variants with arbitrary properties so we only set `--cols:3` on a larger breakpoint.
<div class="grid-simple sm:[--cols:3]">
<div class="item">...</div>
<div class="item">...</div>
<div class="item">...</div>
</div>
This makes your layouts very declarative. You can immediately tell what's going on when you read the HTML.
Now, on the other hand, if you're uncomfortable with too much Tailwind magic, you can always use `@apply` to copy-paste the utility into your CSS. This way, you don't have to bother writing the `repeat()` and `minmax()` declarations every time you need a grid that `grid-simple` can create.
.your-layout {
@apply grid-simple;
@media (width >= 640px) {
--cols: 3;
}
}
<div class="your-layout"> ... </div>
By the way, using `@apply` this way is surprisingly useful for creating complex layouts! But that seems out of scope for this article, so I'll be happy to show you an example another day.
Tailwind's utilities are very powerful by themselves, but they're even more powerful if you allow yourself to use `@apply` (and allow yourself to detach from traditional Tailwind advice). By doing this, you gain access to Tailwind as a tool instead of a dogmatic approach.
To make Tailwind’s utilities even more powerful, you might want to consider building utilities that can help you create layouts and nice visual effects quickly and easily.
I’ve built a handful of these utilities for Splendid Labz and I’m happy to share them with you if you’re interested! Just check out Splendid Layouts to see a subset of the utilities I’ve prepared.
By the way, the utilities I showed you above are watered-down versions of the actual ones I’m using in Splendid Labz.
One more note: As of this writing, Splendid Layouts works with Tailwind 3, not Tailwind 4. I'm working on a release soon, so sign up for updates if you're interested!
Tailwind’s @apply Feature is Better Than it Sounds originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
Latest entry by Blogger,
Port management in Docker and Docker Compose is essential to properly expose containerized services to the outside world, both in development and production environments.
Understanding how port mapping works helps avoid conflicts, ensures security, and improves network configuration.
This tutorial will help you understand how to configure and map ports effectively in Docker and Docker Compose.
Port mapping exposes network services running inside a container to the host, to other containers on the same host, or to other hosts and network devices. It allows you to map a specific port from the host system to a port on the container, making the service accessible from outside the container.
In the schematic below, there are two separate services running in two containers, and both use port 80. Their ports are mapped to host ports 8080 and 8090, and thus they are accessible from outside through these two ports.
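That schematic can be sketched in Docker Compose syntax (covered in more detail later in this tutorial). The service names and images here are illustrative assumptions, not part of the original schematic:

```yaml
# Two services, both listening on container port 80,
# published on different host ports to avoid a conflict.
services:
  web-one:
    image: nginx:latest    # illustrative image choice
    ports:
      - "8080:80"          # host 8080 -> container 80
  web-two:
    image: httpd:latest    # illustrative image choice
    ports:
      - "8090:80"          # host 8090 -> container 80
```

Because each container has its own network namespace, both can bind port 80 internally; only the host ports need to differ.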
Typically, a running container has its own isolated network namespace with its own IP address. By default, containers can communicate with each other and with the host system, but external network access is not automatically enabled.
Port mapping is used to create communication between the container's isolated network and the host system's network.
For example, let's map Nginx's container port 80 to host port 8080:
docker run -d --publish 8080:80 nginx
The `--publish` flag (usually shortened to `-p`) is what allows us to create that association between the local port (8080) and the port of interest to us in the container (80).
In this case, to access it, you simply use a web browser and access http://localhost:8080
On the other hand, if the image you are using to create the container has made good use of the EXPOSE instruction, you can use the command in this other way:
docker run -d --publish-all hello-world
Docker takes care of choosing a random port on your machine (instead of port 80 or other specified ports) to map to those specified in the Dockerfile.
Docker Compose allows you to define container configurations in a `docker-compose.yml` file. To map ports, you use the `ports` YAML directive.
version: '3.8'
services:
web:
image: nginx:latest
ports:
- "8080:80"
In this example, as in the previous case, the Nginx container will expose port 80 on the host's port 8080.
It is important not to confuse the `ports` directive with the `expose` directive. The former creates true port forwarding to the outside. The latter only serves to document that an internal port is being used by the container, but does not create any exposure to the host.
services:
app:
image: myapp
expose:
- "3000"
In this example, port 3000 will only be accessible from other containers in the same Docker network, but not from outside.
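To see the contrast directly, here is a sketch that uses both directives in one Compose file (the image names are placeholder assumptions):

```yaml
services:
  app:
    image: myapp           # placeholder image
    expose:
      - "3000"             # documentation/internal only: reachable by other
                           # containers on the same network, not by the host
  web:
    image: nginx:latest
    ports:
      - "8080:80"          # real forwarding: host port 8080 -> container 80
```

Here, `web` is reachable at http://localhost:8080, while `app` can only be reached as `app:3000` from other containers on the same Compose network.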
You just saw how to map a single port, but Docker also allows you to map more than one port at a time. This is useful when your container needs to expose multiple services on different ports.
Let's configure an nginx server to handle both HTTP and HTTPS traffic:
docker run -p 8080:80 -p 443:443 nginx
Now the server listens for both HTTP traffic on host port 8080, mapped to port 80 inside the container, and HTTPS traffic on port 443, mapped to port 443 inside the container.
By default, Docker binds container ports to all available IP addresses on the host machine. If you need to bind a port to a specific IP address on the host, you can specify that IP in the command. This is useful when you have multiple network interfaces or want to restrict access to specific IPs.
docker run -p 192.168.1.100:8080:80 nginx
This command binds port 8080 on the specific IP address `192.168.1.100` to port 80 inside the container.
Sometimes, you may need to map a range of ports instead of a single port. Docker allows this by specifying a range for both the host and container ports. For example,
docker run -p 5000-5100:5000-5100 nginx
This command maps a range of ports from 5000 to 5100 on the host to the same range inside the container. This is particularly useful when running services that need multiple ports, like a cluster of servers or applications with several endpoints.
In situations where you need to avoid conflicts, security concerns, or manage different environments, you may want to map different port numbers between the host machine and the container. This can be useful if the container uses a default port, but you want to expose it on a different port on the host to avoid conflicts.
docker run -p 8081:80 nginx
This command maps port 8081 on the host to port 80 inside the container. Here, the container is still running its web server on port 80, but it is exposed on port 8081 on the host machine.
By default, Docker maps TCP ports. However, you can also map UDP ports if your application uses UDP. This is common for protocols and applications that require low latency, real-time communication, or broadcast-based communication.
For example, DNS uses UDP for query and response communication due to its speed and low overhead. If you are running a DNS server inside a Docker container, you would need to map UDP ports.
docker run -p 53:53/udp ubuntu/bind9
Here this command maps UDP port 53 on the host to UDP port 53 inside the container.
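The `docker run` flags shown above all have Docker Compose equivalents. As a sketch (the service name is hypothetical; the images, addresses, and ports are taken from the examples above), the same mappings can be written under `ports`, including Compose's long syntax for the UDP case:

```yaml
services:
  myservice:
    image: ubuntu/bind9
    ports:
      - "192.168.1.100:8080:80"   # bind to a specific host IP
      - "5000-5100:5000-5100"     # map a whole port range
      - target: 53                # long syntax: container port
        published: 53             # host port
        protocol: udp             # UDP instead of the default TCP
```

The short `"host:container"` strings and the long `target`/`published`/`protocol` form can be mixed freely in the same `ports` list.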
Once you have set up port mapping, you may want to verify that it's working as expected. Docker provides several tools for inspecting and troubleshooting port mappings.
To list all active containers and see their port mappings, use the `docker ps` command. The output includes a `PORTS` column that shows the mapping between the host and container ports.
docker ps
This might output something like `0.0.0.0:8080->80/tcp` in the `PORTS` column.
If you need more detailed information about a container's port mappings, you can use `docker inspect`. This command gives you a JSON output with detailed information about the container's configuration.
docker inspect <container_id> | grep "Host"
This command will display the port mapping details, such as the `HostIp` and `HostPort` entries.
When you are first learning Docker, one of the more tricky topics is often networking and port mapping. I hope this rundown has helped clarify how port mapping works and how you can effectively use it to connect your containers to the outside world and orchestrate services across different environments.