QR Codes and Linux: Bridging Open-Source Technology with Seamless Connectivity
Blogger posted a blog entry in Linux News
By: Janus Atienza Thu, 09 Jan 2025 17:34:55 +0000
QR codes have revolutionized how we share information, offering a fast and efficient way to connect the physical and digital worlds. In the Linux ecosystem, the adaptability of QR codes aligns seamlessly with the open-source philosophy, enabling developers, administrators, and users to integrate QR code functionality into various workflows. Leveraging a free QR code generator can simplify this process, making it accessible even for those new to the technology. From system administration to enhancing user interfaces, using QR codes in Linux environments is both practical and innovative.

QR Codes on Linux: Where and How They Are Used
QR codes serve diverse purposes in Linux systems, providing solutions that enhance functionality and user experience. For instance, Linux administrators can generate QR codes that link to system logs or troubleshooting guides, offering easy access during remote sessions. In secure file sharing, QR codes can embed links to files, enabling safe resource sharing without exposing the system to vulnerabilities. Additionally, Linux's prevalence in IoT device management is complemented by QR codes, which simplify pairing and configuring devices. In education, teachers and learners attach QR codes to scripts, tutorials, or other resources, ensuring quick access to valuable materials. These examples demonstrate how QR codes integrate seamlessly into Linux workflows to improve efficiency and usability.

How to Generate QR Codes on Linux
Linux users have several methods to create QR codes, from terminal-based commands to online tools like me-qr.com, which offer user-friendly interfaces. Here is a list of ways to use QR codes within Linux environments:

- Automate QR code generation with cron jobs for time-sensitive data.
- Encode secure access tokens or one-time passwords in QR codes.
- Store Linux commands in QR codes for quick scanning and execution.
- Exchange encrypted messages by encoding them into QR codes with encryption tools.
- Create QR codes linking to installation scripts or system resources.

In Linux environments, QR codes are not limited to traditional uses. For instance, remote server management can be streamlined with QR codes containing SSH keys or login details, allowing devices to connect over encrypted channels. Similarly, QR codes can be used in disaster recovery processes to store encryption keys or recovery instructions. For Linux-based applications, developers embed QR codes into app interfaces to direct users to support pages or additional features, decluttering the UI. Collaborative workflows also benefit from QR codes that link directly to Git repositories, enabling seamless project sharing among teams. These creative applications illustrate the versatility of QR codes in enhancing functionality and security within Linux systems.

The Open-Source Potential of QR Codes on Linux
As Linux continues to power diverse applications, from servers to IoT devices, QR codes add a layer of simplicity and connectivity. Whether you are looking to generate QR codes for free for file sharing or embed codes into an application, Linux users have a wealth of options at their fingertips. Platforms like me-qr.com provide an intuitive and accessible way to create QR codes, while command-line tools offer flexibility for advanced users. With their ability to streamline workflows and enhance user experiences, QR codes are an indispensable asset in the Linux ecosystem.
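For readers who want to try the command-line route mentioned above, here is a minimal sketch using the qrencode utility, which is packaged in most distributions (the URL is just a placeholder):

```bash
# Install the encoder (Debian/Ubuntu shown; use your distro's package manager)
sudo apt install qrencode

# Write a PNG QR code that points at a placeholder URL
qrencode -o linux-resources.png "https://example.com/linux-guide"

# Or render the QR code directly in the terminal as UTF-8 blocks
qrencode -t ansiutf8 "https://example.com/linux-guide"
```

Scanning the generated image with any phone camera opens the encoded link, which covers most of the sharing scenarios described in this article.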
Let the power of open-source meet the versatility of QR codes, and watch your Linux environment transform into a hub of connectivity and innovation. The post QR Codes and Linux: Bridging Open-Source Technology with Seamless Connectivity appeared first on Unixmen. -
Tight Mode: Why Browsers Produce Different Performance Results
Blogger posted a blog entry in Programmer's Corner
by: Geoff Graham Thu, 09 Jan 2025 16:16:15 +0000 I wrote a post for Smashing Magazine that was published today about this thing that Chrome and Safari have called “Tight Mode” and how it impacts page performance. I’d never heard the term until DebugBear’s Matt Zeunert mentioned it in a passing conversation, but it’s a not-so-new deal and yet there’s precious little documentation about it anywhere. So, Matt shared a couple of resources with me and I used those to put some notes together that wound up becoming the article that was published. In short: The implications are huge, as it means resources are not treated equally at face value. And yet the way Chrome and Safari approach it is wildly different, meaning the implications are wildly different depending on which browser is being evaluated. Firefox doesn’t enforce it, so we’re effectively looking at three distinct flavors of how resources are fetched and rendered on the page. It’s no wonder web performance is a hard discipline when we have these moving targets. Sure, it’s great that we now have a consistent set of metrics for evaluating, diagnosing, and discussing performance in the form of Core Web Vitals — but those metrics will never be consistent from browser to browser when the way resources are accessed and prioritized varies. Tight Mode: Why Browsers Produce Different Performance Results originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter. -
Installing Listmonk - Self-hosted Newsletter and Mailing List Manager
Blogger posted a blog entry in Linux Tips
As a tech-enthusiast content creator, I'm always on the lookout for innovative ways to connect with my audience and share my passion for technology and self-sufficiency. But as my newsletter grew in popularity, I found myself struggling with the financial burden of relying on external services like Mailgun - a problem many creators face when trying to scale their outreach efforts without sacrificing quality. That's when I discovered Listmonk, a free and open-source mailing list manager that not only promises high performance but also gives me complete control over my data. In this article, I'll walk you through how I successfully installed and deployed Listmonk using Docker, sharing my experiences and lessons learned along the way. I used a Linode cloud server to test the scenario; you may use Linode, DigitalOcean, or your own server.

Prerequisites
Before diving into the setup process, make sure you have the following:
- Docker and Docker Compose installed on your server.
- A custom domain that you want to use for Listmonk.
- Basic knowledge of shell commands and editing configuration files.
If you are absolutely new to Docker, we have a course just for you: Learn Docker: Complete Beginner's Course on Linux Handbook.

Step 1: Set up the project directory
The first thing you need to do is create the directory where you'll store all the necessary files for Listmonk. I like an organized setup (it helps with troubleshooting). In your terminal, run:
mkdir listmonk
cd listmonk
This will set up a dedicated directory for Listmonk's files.

Step 2: Create the Docker compose file
Listmonk has made it incredibly easy to get started with Docker. The official documentation provides a detailed guide and even a sample docker-compose.yml file to help you get up and running quickly. Download the sample file to the current directory:
curl -LO https://github.com/knadh/listmonk/raw/master/docker-compose.yml
Here is the sample docker-compose.yml file; I tweaked some of the default environment variables.
💡 It's crucial to keep your credentials safe! Store them in a separate .env file, not hardcoded in your docker-compose.yml. I know, I know, I did it for this tutorial... but you're smarter than that, right? 😉
For most users, this setup should be sufficient, but you can always tweak the settings to your own needs.
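The upstream file changes over time, so treat the following as a rough sketch of what it typically contains: two services, the Listmonk app and a PostgreSQL database. Image tags and credentials here are placeholders, and the environment variable names follow Listmonk's LISTMONK_section__key convention as I recall it; the file you downloaded with curl above is the authoritative version.

```yaml
# Sketch of a typical Listmonk docker-compose.yml (placeholder values)
services:
  app:
    image: listmonk/listmonk:latest
    container_name: listmonk_app
    restart: unless-stopped
    ports:
      - "9000:9000"            # Listmonk web UI
    environment:
      - LISTMONK_app__address=0.0.0.0:9000
      - LISTMONK_db__host=db
      - LISTMONK_db__port=5432
      - LISTMONK_db__user=listmonk
      - LISTMONK_db__password=change-me   # move this into a .env file
      - LISTMONK_db__database=listmonk
    depends_on:
      - db
    networks:
      - listmonk

  db:
    image: postgres:17-alpine
    container_name: listmonk_db
    restart: unless-stopped
    environment:
      - POSTGRES_USER=listmonk
      - POSTGRES_PASSWORD=change-me       # same credential as above
      - POSTGRES_DB=listmonk
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    networks:
      - listmonk

networks:
  listmonk:
```

If you follow the .env advice from the tip above, you can write the password as ${POSTGRES_PASSWORD} here and keep the real value in a .env file next to the compose file; Docker Compose reads that file automatically.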
Then run the container in the background:
docker compose up -d
Once you've run these commands, you can access Listmonk by navigating to http://localhost:9000 in your browser.

Setting up SSL
By default, Listmonk runs over HTTP and doesn't include built-in SSL support, which matters if you are running any public-facing service these days. So the next thing we need to do is set up SSL. While I personally prefer using Cloudflare Tunnels for SSL and remote access, this tutorial will focus on Caddy for its straightforward integration with Docker.

Start by creating a folder named caddy in the same directory as your docker-compose.yml file:
mkdir caddy
Inside the caddy folder, create a file named Caddyfile with the following contents:
listmonk.example.com {
    reverse_proxy app:9000
}
Replace listmonk.example.com with your actual domain name. This tells Caddy to proxy requests from your domain to the Listmonk service running on port 9000. Ensure your domain is correctly configured in DNS: add an A record pointing to your server's IP address (in my case, the Linode server's IP). If you're using Cloudflare, set the proxy status to DNS only during the initial setup to let Caddy handle SSL certificates.

Next, add the Caddy service to your docker-compose.yml file. Here's the configuration to include:
  caddy:
    image: caddy:latest
    restart: unless-stopped
    container_name: caddy
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./caddy/Caddyfile:/etc/caddy/Caddyfile
      - ./caddy/caddy_data:/data
      - ./caddy/caddy_config:/config
    networks:
      - listmonk
This configuration sets up Caddy to handle HTTP (port 80) and HTTPS (port 443) traffic, automatically obtain SSL certificates, and reverse proxy requests to the Listmonk container. Finally, restart your containers to apply the new settings:
docker compose restart
Once the containers are up and running, navigate to your domain (e.g., https://listmonk.example.com) in a browser. Caddy will handle the SSL certificate issuance and proxy the traffic to Listmonk seamlessly.

Step 3: Accessing the Listmonk web UI
Once Listmonk is up and running, it's time to access the web interface and complete the initial setup. Open your browser and navigate to the domain or IP address where Listmonk is hosted. If you've configured HTTPS, the URL should look something like https://listmonk.yourdomain.com, and you'll be greeted with the login page. Click Login to proceed.

Creating the admin user
On the login screen, you'll be prompted to create an administrator account. Enter your email address, a username, and a secure password, then click Continue. This account will serve as the primary admin for managing Listmonk.

Configure general settings
Once logged in, navigate to Settings > Settings in the left sidebar. Under the General tab, customize the following:
- Site Name: enter a name for your Listmonk instance.
- Root URL: replace the default http://localhost:9000 with your domain (e.g., https://listmonk.yourdomain.com).
- Admin Email: add an email address for administrative notifications.
Click Save to apply these changes.

Configure SMTP settings
To send emails, you'll need to configure SMTP settings:
- Click on the SMTP tab in the settings.
- Fill in the details:
  - Host: smtp.emailhost.com
  - Port: 465
  - Auth Protocol: Login
  - Username: your email address
  - Password: your email password (or a Gmail App password, generated via Google's security settings)
  - TLS: SSL/TLS
- Click Save to confirm the settings.

Create a new campaign list
Now, let's create a list to manage your subscribers:
- Go to All Lists in the left sidebar and click + New.
- Give your list a name, set it to Public, and choose between Single Opt-In or Double Opt-In.
- Add a description, then click Save.
Your newsletter subscription form will now be available at: https://listmonk.yourdomain.com/subscription/form

With everything set up and running smoothly, it's time to put Listmonk to work. You can easily import your existing subscribers, customize the look and feel of your emails, and even change the logo to match your brand.
Final thoughts
And that's it! You've successfully set up Listmonk, configured SMTP, and created your first campaign list. From here, you can start sending newsletters and growing your audience. I'm currently testing Listmonk as the newsletter solution for my own website, and while it's robust, I'm curious to see how it performs in a production environment. That said, I'm genuinely impressed by the thought and effort that Kailash Nadh and the contributors have put into this software; it's a remarkable achievement. For any questions or challenges you encounter, the Listmonk GitHub page is an excellent resource and the developers are highly responsive. Finally, I'd love to hear your thoughts! Share your feedback, comments, or suggestions below, and tell me about your experience with Listmonk and how you're using it in your projects. Happy emailing! 📨
-
by: Chris Coyier Mon, 06 Jan 2025 20:47:37 +0000 Like Miriam Suzanne says: I like the idea of controlling my own experience when browsing and using the web. Bump up that default font size, you’re worth it. Here’s another version of control. If you publish a truncated RSS feed on your site, but the site itself has more content, I reserve the right to go fetch that content and read it through a custom RSS feed. I feel like that’s essentially the same thing as if I had an elaborate user stylesheet that I applied just to that website that made it look how I wanted it to look. It would be weird to be anti user-stylesheet. I probably don’t take enough control over my own experience on sites, really. Sometimes it’s just a time constraint where I don’t have the spoons to do a bunch of customization. But the spoon math changes when it has to do with doing my job better. I was thinking about this when someone poked me that an article I published had a wrong link in it. As I was writing it in WordPress, somehow I linked the link to some internal admin screen URL instead of where I was trying to link to. Worse, I bet I’ve made that same mistake 10 times this year. I don’t know what the heck the problem is (some kinda fat finger issue, probably) but the same problem is happening too much. What can help? User stylesheets can help! I love it when CSS helps me do my job in weird subtle ways better. I’ve applied this CSS now: .editor-visual-editor a[href*="/wp-admin/"]::after { content: " DERP!"; color: red; } That first class is just something to scope down the editor area in WordPress, then I select any links that have “wp-admin” in them, which I almost certainly do not want to be linking to, and show a visual warning. It’s a little silly, but it will literally work to stop this mistake I keep making. I find it surprising that only Safari has entirely native support for a linking up your own user CSS, but there are ways to do it via extension or other features in all browsers. Welp now that we’re talking about CSS I can’t help but share some of my favorite links in that area now. Dave put his finger on an idea I’m wildly jealous of: CSS wants to be a system. Yes! It so does! CSS wants to be a system! Alone, it’s just selectors, key/value pairs, and a smattering of other features. It doesn’t tell you how to do it, it is lumber and hardware saying build me into a tower! And also: do it your way! And the people do. Some people’s personality is: I have made this system, follow me, disciples, and embrace me. Other people’s personality is: I have also made a system, it is mine, my own, my prec… please step back behind the rope. Annnnnnd more. CSS Surprise Manga Lines from Alvaro are fun and weird and clever. Whirl: “CSS loading animations with minimal effort!” Jhey’s got 108 of them open sourced so far (like, 5 years ago, but I’m just seeing it.) Next-level frosted glass with backdrop-filter. Josh covers ideas (with credit all the way back to Jamie Gray) related to the “blur the stuff behind it” look. Yes, backdrop-filter does the heavy lifting, but there are SO MANY DETAILS to juice it up. Custom Top and Bottom CSS Container Masks from Andrew is a nice technique. I like the idea of a “safe” way to build non-rectangular containers where the content you put inside is actually placed safely.
-
The Importance of Investing in Soft Skills in the Age of AI
Blogger posted a blog entry in Programmer's Corner
by: Andy Bell Mon, 06 Jan 2025 14:58:46 +0000 I’ll set out my stall and let you know I am still an AI skeptic. Heck, I still wrap “AI” in quotes a lot of the time I talk about it. I am, however, skeptical of the present, rather than the future. I wouldn’t say I’m positive or even excited about where AI is going, but there’s an inevitability that in development circles, it will be further engrained in our work. We joke in the industry that the suggestions that AI gives us are more often than not, terrible, but that will only improve in time. A good basis for that theory is how fast generative AI has improved with image and video generation. Sure, generated images still have that “shrink-wrapped” look about them, and generated images of people have extra… um… limbs, but consider how much generated AI images have improved, even in the last 12 months. There’s also the case that VC money is seemingly exclusively being invested in AI, industry-wide. Pair that with a continuously turbulent tech recruitment situation, with endless major layoffs and even a skeptic like myself can see the writing on the wall with how our jobs as developers are going to be affected. The biggest risk factor I can foresee is that if your sole responsibility is to write code, your job is almost certainly at risk. I don’t think this is an imminent risk in a lot of cases, but as generative AI improves its code output — just like it has for images and video — it’s only a matter of time before it becomes a redundancy risk for actual human developers. Do I think this is right? Absolutely not. Do I think it’s time to panic? Not yet, but I do see a lot of value in evolving your skillset beyond writing code. I especially see the value in improving your soft skills. What are soft skills? A good way to think of soft skills is that they are life skills. Soft skills include: communicating with others, organizing yourself and others, making decisions, and adapting to difficult situations. I believe so much in soft skills that I call them core skills and for the rest of this article, I’ll refer to them as core skills, to underline their importance. The path to becoming a truly great developer is down to more than just coding. It comes down to how you approach everything else, like communication, giving and receiving feedback, finding a pragmatic solution, planning — and even thinking like a web developer. I’ve been working with CSS for over 15 years at this point and a lot has changed in its capabilities. What hasn’t changed though, is the core skills — often called “soft skills” — that are required to push you to the next level. I’ve spent a large chunk of those 15 years as a consultant, helping organizations — both global corporations and small startups — write better CSS. In almost every single case, an improvement of the organization’s core skills was the overarching difference. The main reason for this is a lot of the time, the organizations I worked with coded themselves into a corner. They’d done that because they just plowed through — Jira ticket after Jira ticket — rather than step back and question, “is our approach actually working?” By focusing on their team’s core skills, we were often — and very quickly — able to identify problem areas and come up with pragmatic solutions that were almost never development solutions. 
These solutions were instead: Improving communication and collaboration between design and development teams Reducing design “hand-off” and instead, making the web-based output the source of truth Moving slowly and methodically to move fast Putting a sharp focus on planning and collaboration between developers and designers, way in advance of production work being started Changing the mindset of “plow on” to taking a step back, thoroughly evaluating the problem, and then developing a collaborative and by proxy, much simpler solution Will improving my core skills actually help? One thing AI cannot do — and (hopefully) never will be able to do — is be human. Core skills — especially communication skills — are very difficult for AI to recreate well because the way we communicate is uniquely human. I’ve been doing this job a long time and something that’s certainly propelled my career is the fact I’ve always been versatile. Having a multifaceted skillset — like in my case, learning CSS and HTML to improve my design work — will only benefit you. It opens up other opportunities for you too, which is especially important with the way the tech industry currently is. If you’re wondering how to get started on improving your core skills, I’ve got you. I produced a course called Complete CSS this year but it’s a slight rug-pull because it’s actually a core skills course that uses CSS as a context. You get to learn some iron-clad CSS skills alongside those core skills too, as a bonus. It’s definitely worth checking out if you are interested in developing your core skills, especially so if you receive a training budget from your employer. Wrapping up The main message I want to get across is developing your core skills is as important — if not more important — than keeping up to date with the latest CSS or JavaScript thing. It might be uncomfortable for you to do that, but trust me, being able to stand yourself out over AI is only going to be a good thing, and improving your core skills is a sure-fire way to do exactly that. The Importance of Investing in Soft Skills in the Age of AI originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter. -
File descriptors are a core concept in Linux and other Unix-like operating systems. They provide a way for programs to interact with files, devices, and other input/output (I/O) resources. Simply put, a file descriptor is like a "ticket" or "handle" that a program uses to access these resources. Every time a program opens a file or creates an I/O resource (like a socket or pipe), the operating system assigns it a unique number called a file descriptor. This number allows the program to read, write, or perform other operations on the resource. And as we all know, in Linux, almost everything is treated as a file—whether it's a text file, a keyboard input, or even network communication. File descriptors make it possible to handle all these resources in a consistent and efficient way.

What Are File Descriptors?
A file descriptor is a non-negative integer assigned by your operating system whenever a program opens a file or another I/O resource. It acts as an identifier that the program uses to interact with the resource. For example:
- When you open a text file, the operating system assigns it a file descriptor (e.g., 3).
- If you open another file, it gets the next available file descriptor (e.g., 4).
These numbers are used internally by the program to perform operations like reading from or writing to the resource. This simple mechanism allows programs to interact with different resources without needing to worry about how these resources are implemented underneath. For instance, whether you're reading from a keyboard or writing to a network socket, you use file descriptors in the same way!

The three standard file descriptors
Every process in Linux starts with three predefined file descriptors: Standard Input (stdin), Standard Output (stdout), and Standard Error (stderr). Here's a brief summary of their use:

Descriptor | Integer Value | Symbolic Constant | Purpose
stdin      | 0             | STDIN_FILENO      | Standard input (keyboard input by default)
stdout     | 1             | STDOUT_FILENO     | Standard output (screen output by default)
stderr     | 2             | STDERR_FILENO     | Standard error (error messages by default)

Now, let's address each file descriptor in detail.

1. Standard Input (stdin) - Descriptor: 0
The purpose of the standard input stream is to receive input data. By default, it reads input from the keyboard unless redirected to another source like a file or pipe. Programs use stdin to accept user input interactively or to process data from external sources. When you type something into the terminal and press Enter, the data is sent to the program's stdin. This stream can also be redirected to read from files or other programs using the shell redirection operator (<). One simple example of stdin would be a script that takes input from the user and prints it:

#!/bin/bash
# Prompt the user to enter their name
echo -n "Enter your name: "
# Read the input from the user
read name
# Print a greeting message
echo "Hello, $name!"

When you run it, the script waits for your name and then prints the greeting. But there is another way of using the input stream: redirecting the input itself. You can create a text file and redirect the input stream. For example, here I created a sample text file named input.txt which contains my name, Satoshi, and then redirected the input stream with <. As you can see, rather than waiting for my input, the script took the data from the text file, so we somewhat automated the interaction.
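The descriptor numbers mentioned earlier (3, 4, and so on) are not just theory; you can open them yourself from a script. A small sketch, assuming a file named input.txt exists in the current directory:

```bash
#!/bin/bash
# Open input.txt for reading on file descriptor 3
exec 3< input.txt

# Read one line through descriptor 3 instead of stdin
read -u 3 first_line
echo "First line of the file: $first_line"

# Close descriptor 3 when we are done with it
exec 3<&-
```

Running ls -l /proc/$$/fd is also a handy way to see which descriptors your current shell has open.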
2. Standard Output (stdout) - Descriptor: 1
The standard output stream is used for displaying normal output generated by programs. By default, it writes output to the terminal screen unless redirected elsewhere. In simple terms, programs use stdout to print results or messages. This stream can be redirected to write output to files or other programs using shell operators (> or |). Let's take a simple script that prints a greeting message:

#!/bin/bash
# Print a message to standard output
echo "This is standard output."

The output is nothing crazy, but it is a decent example. Now, if I want to redirect the output to a file rather than showing it on the terminal screen, I can use > as shown here:
./stdout.sh > output.txt
Another good example is redirecting the output of a command to a text file:
ls > output.txt

3. Standard Error (stderr) - Descriptor: 2
The standard error stream is used for displaying error messages and diagnostics. It is separate from stdout so that errors can be handled independently of normal program output. For a better understanding, I wrote a script that writes to stderr and uses exit 1 to mimic a faulty execution:

#!/bin/bash
# Print a message to standard output
echo "This is standard output."
# Print an error message to standard error
echo "This is an error message." >&2
# Exit with a non-zero status to indicate an error
exit 1

If you simply execute this script, both messages appear in the terminal and look the same. To understand the separation better, you can redirect the output and the errors to different files. For example, here the error message goes into stderr.log while the normal output goes into stdout.log:
./stderr.sh > stdout.log 2> stderr.log

Bonus: Types of limits on file descriptors
The Linux kernel puts a limit on the number of file descriptors a process can use. These limits help manage system resources and prevent any single process from using too many. There are different types of limits, each serving a specific purpose:
- Soft Limits: The default maximum number of file descriptors a process can open. Users can temporarily increase this limit up to the hard limit for their session.
- Hard Limits: The absolute maximum number of file descriptors a process can open. Only the system admin can increase this limit, to ensure system stability.
- Process-Level Limits: Each process has its own set of file descriptor limits, inherited from its parent process, to prevent any single process from overusing resources.
- System-Level Limits: The total number of file descriptors available across all processes on the system. This ensures fairness and prevents global resource exhaustion.
- User-Level Limits: Custom limits set for specific users or groups to allocate resources differently based on their needs.

Wrapping Up...
In this explainer, I went through what file descriptors are in Linux and shared some practical examples to explain their function. I tried to cover the types of limits in detail but then had to drop the "detail" to stick to the main idea of this article. If you want, I can surely write a detailed article on the types of limits on file descriptors. Also, if you have any questions or suggestions, leave us a comment.
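As a small addendum to the limits discussed above, the soft and hard limits for open files can be inspected and adjusted from the shell. A quick sketch (ulimit is a shell built-in; the exact defaults vary by distribution):

```bash
# Show the current soft limit on open file descriptors
ulimit -Sn

# Show the hard limit (the ceiling a normal user can raise the soft limit to)
ulimit -Hn

# Raise the soft limit for the current shell session, up to the hard limit
ulimit -n 4096

# System-wide ceiling across all processes
cat /proc/sys/fs/file-max
```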
-
by: Abhishek Prakash
-
I don't like my prompt and I want to change it. It has my username and host, but the formatting is not what I want. This blog will get you started quickly on doing exactly that.

To change the prompt, you update .bashrc and set the PS1 environment variable to a new value. Here is a cheatsheet of the prompt options. You can use these placeholders for customization:
\u – Username
\h – Hostname
\w – Current working directory
\W – Basename of the current working directory
\$ – Shows $ for a normal user and # for the root user
\t – Current time (HH:MM:SS)
\d – Date (e.g., "Mon Jan 05")
\! – History number of the command
\# – Command number

Here is the new prompt I am going to use:
export PS1="linuxhint@mybox \w: "
Can you guess what that does? Yes, for my article writing this is exactly what I want. A lot of people will want the username and hostname; for my example I don't, but you can use \u and \h for that. I used \w to show what directory I am in. You can also show the date and time, and more.

You can also play with setting colors in the prompt with these escape sequences:
Foreground Colors: \e[30m – Black, \e[31m – Red, \e[32m – Green, \e[33m – Yellow, \e[34m – Blue, \e[35m – Magenta, \e[36m – Cyan, \e[37m – White
Background Colors: \e[40m – Black, \e[41m – Red, \e[42m – Green, \e[43m – Yellow, \e[44m – Blue, \e[45m – Magenta, \e[46m – Cyan, \e[47m – White
Reset Color: \e[0m – Reset to default

Here is my colorful version. The extra backslashes and brackets are needed so bash knows the color codes take up no space on screen and the prompt does not break:
export PS1="\[\e[35m\]linuxhint\[\e[0m\]@\[\e[34m\]mybox\[\e[0m\] \[\e[31m\]\w\[\e[0m\]: "
This uses magenta, blue, and red coloring for different parts of the prompt.

Conclusion
You have seen how to customize your bash prompt with the PS1 environment variable in Ubuntu. Hope this helps you be happy with your environment in Linux.
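One step the article implies but does not show: the export line above only affects the current session. A minimal sketch of making it permanent (assuming the default ~/.bashrc location):

```bash
# Append the new prompt definition to ~/.bashrc so every new shell picks it up
echo 'export PS1="\[\e[35m\]linuxhint\[\e[0m\]@\[\e[34m\]mybox\[\e[0m\] \[\e[31m\]\w\[\e[0m\]: "' >> ~/.bashrc

# Reload the file in the current shell to apply it immediately
source ~/.bashrc
```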
-
by: Abhishek Prakash
-
by: Geoff Graham Mon, 30 Dec 2024 16:15:37 +0000
I'll be honest: writing this post feels like a chore some years. Rounding up and reflecting on what's happened throughout the year is somewhat obligatory for a site like this, especially when it's a tradition that goes back as far as 2007. "Hey, look at all the cool things we did!" This year is different. Much different. I'm more thankful this time around because, last year, I didn't even get to write this post. At this time last year, I was a full-time student bent on earning a master's degree while doing part-time contract work. But now that I'm back, writing this feels so, so, so good. There's a lot more gusto going into my writing when I say: thank you so very much! It's because of you and your support for this site that I'm back at my regular job. I'd be remiss if I didn't say that, so please accept my sincerest gratitude and appreciation. Thank you! Let's tie a bow on this year and round up what happened around here in 2024.

Overall traffic
Is it worth saying anything about traffic? This site's pageviews had been trending down since 2020 as it has for just about any blog about front-end dev, but it absolutely cratered when the site was on pause for over a year. Things began moving again in late May, but it was probably closer to mid-June when the engine fully turned over and we resumed regular publishing. And, yes. With regular publishing came a fresh influx of pageviews. Funny how much difference it makes just turning on the lights. All said and done, we had 26 million unique pageviews in 2024. That's exactly what we had in 2023 as traffic went into a tailspin, so I call it a win that we stopped the bleeding and broke even this year.

Publishing
A little bit of history when it comes to how many articles we publish each year:
2020: 1,183 articles
2021: 890 articles (site acquired by DigitalOcean)
2022: 390 articles
2023: 0 articles (site paused)
2024: 153 articles (site resumed in late June)
Going from 0 articles to 153 (including this one) in six months was no small task. I was the only writer on the team until about October. There are only three of us right now; even then, we're all extremely part-time workers. Between us and 19 guest authors, I'd say that we outperformed expectations as far as quantity goes — but I'm even more proud of the effort and quality that goes into each one. It's easy to imagine publishing upwards of 400 articles in 2025 if we maintain the momentum. Case in point: we published a whopping three guides in six months: CSS Anchor Positioning, CSS Length Units, and CSS Selectors. That might not sound like a lot, so I'll put it in context. We published just one guide in 2022 and our goal was to write three in all of 2021. We got three this year alone, and they're all just plain great. I visit Juan's Anchor Positioning guide as much as — if not more than — I do the ol' Flexbox and Grid guides. On top of that, we garnered 34 new additions to the CSS-Tricks Almanac! That includes all of the features for Anchor Positioning and View Transitions, as well as other new features like @starting-style. And the reason we spent so much time in the Almanac is because we made some significant…

Site updates
This is where the bulk of the year was spent, so let's break things out into digestible chunks.

Almanac
We refreshed the entire thing! It used to be just selectors and properties, but now we can write about everything from at-rules and functions to pseudos and everything in between.
We still need a lot of help in there, so maybe consider guest writing with us. 😉

Table of Contents
We've been embedding anchor links to section headings in articles for several years, but it required using a WordPress block and it was fairly limiting as far as placement and customization. Now we generate those links automatically and include a conditional that allows us to toggle it on and off for specific articles. I'm working on an article about how it came together that we'll publish after the holiday break.

Notes
There's a new section where we take notes on what other people are writing about and share our takeaways with you. The motivation was to lower the barrier to writing more freely. Technical writing takes a lot of care and planning that's at odds with openly learning and sharing. This way, we have a central spot where you can see what we're learning and join us along the way — such as this set of notes I took from Bramus' amazing free course on scroll-driven animations.

Links
This is another area of the site that got a fresh coat of paint. Well, more than paint. It used to be that links were in the same stream as the rest of the articles, tutorials, and guides we publish. Links are meant to be snappy, sharable bits — conversation starters if you will. Breaking them out of the main feed into their own distinguished section helps reduce the noise on this site while giving links a brighter spotlight with a quicker path to get to the original article. Like when there's a new resource for learning Anchor Positioning, we can shoot that out a lot more easily.

Quick Hits
We introduced another new piece of content in the form of brief one-liners that you might typically find us posting on Mastodon or Bluesky. We still post to those platforms but now we can write them here on the site and push them out when needed. There's a lot more flexibility there, even if we haven't given it a great deal of love just yet.

Picks
There's a new feed of the articles we're reading. It might seem a lot like Links, but the idea is that we can simply "star" something from our RSS reader and it'll show up in the feed. They're simply interesting articles that catch our attention that we want to spotlight and share, even if we don't have any commentary to contribute. This was Chris' brainchild a few years ago and it feels so good to bring it to fruition. I'll write something up about it after the break, but you can already head over there.

Baseline Status
Ooo, this one's fun! I saw that the Chrome team put out a new web component for embedding web platform browser support information on a page, so I set out to make it into a WordPress block we can use throughout the Almanac, which we're already starting to roll out as content is published or refreshed (such as here in the anchor-name property). I'm still working on a write-up about it, but I've already made it available in the WordPress Plugin Directory if you want to grab it for your WordPress site. Or, here… I can simply drop it in and show you.

Post Slider
This was one of the first things I made when re-joining the team. We wanted to surface a greater number of articles on the homepage so that it's easier to find specific types of content, whether it's the latest five articles, the 10 most recently updated Almanac items or guides, classic CSS tricks from ages ago… that sort of thing. So, we got away from merely showing the 10 most recent articles and developed a series of post sliders that pull from different areas of the site.
Converting our existing post slider component into a WordPress block made it more portable and a heckuva lot easier to update the homepage — and any other page or post where we might need a post slider. In fact, that's another one I can demo for you right here…
[Embedded post slider: "Classic Tricks: Timeless CSS gems", featuring Scroll Animation, Yellow Flash, Self-Drawing Shapes, Scroll Shadows, Editable Style Blocks, Scroll Indicator, Border Triangles, Pin Scrolling to Bottom, and Infinite Scrolling Background Image, all by Chris Coyier.]
So, yeah. This year was heavier on development than many past years. But everything was done with the mindset of making content easier to find, publish, and share. I hope that this is like a little punch on the gas pedal that accelerates our ability to get fresh content out to you.

2025 Goals
I'm quite reluctant to articulate new goals when there are so many things still in flux, but the planner in me can't help myself. If I can imagine a day at the end of next year when I'm reflecting on things exactly like this, I'd be happy, nay stoked, if I was able to say we did these things:
- Publish 1-2 new guides. We already have two in the works! That said, the bar for quality is set very high on these, so it's still a journey to get from planning to publishing two stellar and chunky guides.
- Fill in the Almanac. My oh my, there is SO much work to do in this little corner of the site. We've only got a few pages in the at-rules and functions sections that we recently created and could use all the help we can get.
- Restart the newsletter. This is something I've been itching to do. I know I miss reading the newsletter (especially when Robin was writing it) and this community feels so much smaller and quieter without it. The last issue went out in December 2022 and it's high time we get it going again. The nuts and bolts are still in place. All we need is a little extra resourcing and the will to do it, and we've got at least half of that covered.
- More guest authors. I mentioned earlier that we've worked with 19 guest authors since June of this year. That's great but also not nearly enough given that this site thrives on bringing in outside voices that we can all learn from. We were clearly busy with development and all kinds of other site updates but I'd like to re-emphasize our writing program this year, with the highest priority going into making it as smooth as possible to submit ideas, receive timely feedback on them, and get paid for what gets published. There's a lot of invisible work that goes into that but it's worth everyone's while because it's a win-win-win-win (authors win, readers win, CSS-Tricks wins, and DigitalOcean wins).

Here's to 2025! Thank you. That's the most important thing I want to say. And special thanks to Juan Diego Rodriguez and Ryan Trimble. You may not know it, but they joined the team this Fall and have been so gosh-dang incredibly helpful. I wish every team had a Juan and Ryan just like I do — we'd all be better for it, that's for sure. I know I learn a heckuva lot from them and I'm sure you will (or are!) as well. Give them high-fives when you see them because they deserve it.
✋ Thank You (2024 Edition) originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
-
by: Bill Dyer
During a weekend of tidying up - you know, the kind of chore where you're knee-deep in old boxes before you realize it - I was digging through the dusty cables and old, outdated user manuals when I found something that I had long forgotten: an old Plan9 distribution. Judging by the faded ink and slight warping of the disk sleeve, it had to be from around 1994 or 1995. I couldn't help but wonder: why had I kept this? Back then, I was curious about Plan9. It was a forward-thinking OS that never quite reached its full potential. Holding that disk, however, it felt more like a time capsule, a real reminder of computing's advancements and adventurous spirit in the 1990s.

What Made Plan9 So Intriguing Back Then?
In the 1990s, Bell Labs carried an almost mythical reputation for me. I was a C programmer and Unix system administrator, and the people at Bell Labs were the minds behind Unix and C, after all. When Plan9 was announced, it felt like the next big thing. Plan9 was an operating system that promised to rethink Unix, not just patch it up. The nerd in me couldn't resist playing with it.

A Peek Inside the Distro
Booting up Plan9 wasn't like loading any other OS. From the minimalist Rio interface to the "everything is a file" philosophy taken to its extreme, it was clear this was something different. Some standout features that left an impression:
- 9P Protocol: I didn't grasp its full potential back then, but the idea of treating every resource as part of a unified namespace was extraordinary.
- Custom Namespaces: The concept of every user having their own view of the system wasn't just revolutionary; it was downright empowering.
- Simplicity and Elegance: Even as a die-hard Unix user, I admired Plan9's ability to strip away the cruft without losing functionality.

Looking at Plan9 Today
Curiosity got the better of me, and I decided to see if the disk still worked. Spoiler: it didn't. But thanks to projects like 9front, Plan9 is far from dead. I was able to download an image and fire it up in a VM. The interface hasn't aged well compared to modern GUIs, but its philosophy and design still feel ahead of their time. As a seasoned (read: older) developer, I've come to appreciate things I might have overlooked in the 1990s:
- Efficiency over bloat: In today's world of resource-hungry systems, Plan9's lightweight design is like a breath of fresh air.
- Academic appeal: Its clarity and modularity make Plan9 an outstanding teaching tool for operating system concepts.
- Timeless innovations: Ideas like distributed computing and namespace customization feel even more pertinent in this era of cloud computing.

Why didn't Plan9 take off?
Plan9 was ahead of its time, which often spells doom for innovative tech. Its radical departure from Unix made it incompatible with existing software. And let's face it - developers were (and still are) reluctant to ditch well-established ecosystems. Moreover, by the 1990s, Unix clones, such as Linux, were gaining traction. Open-source communities rallied around Linux, leaving Plan9 with a smaller, academic-focused user base. It just didn't have the commercial or user backing.

Plan9's place in the retro-computing scene
I admit it: I can get sappy and nostalgic over tech history. Plan9 is more than a relic; it's a reminder of a time when operating systems dared to dream big. It never achieved the widespread adoption of Unix or Linux, but it still has a strong following among retro-computing enthusiasts.
Here’s why it continues to matter: For Developers: It’s a masterclass in clean, efficient design.For Historians: It’s a snapshot of what computing could have been.For Hobbyists: It’s a fun, low-resource system to tinker with.Check out the 9front project. It’s a maintained fork that modernizes Plan9 while staying true to its roots. Plan9 can run on modern hardware. It is lightweight enough to run on old machines, but I suggest using a VM; it is the easiest route. Lessons from years pastHow a person uses Plan9 is up to them, naturally, but I don't think that Plan9 is practical for everyday use. Plan9, I believe, is better suited as an experimental or educational platform rather than a daily driver. However, that doesn't mean that it wasn't special. Finding that old Plan9 disk wasn’t just a trip down memory lane; it was a reminder of why I was so drawn to computing. Plan9’s ambition and elegance is still inspiring to me, even decades later. So, whether you’re a retro-computing nerd, like me, or just curious about alternative OS designs, give Plan9 a run. Who knows? You might find a little magic in its simplicity, just like I did.
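If the suggestion to use a VM sounds appealing, here is a rough sketch of what that can look like with QEMU; the file names are placeholders for whatever bootable image you download from the 9front project:

```bash
# Create a small virtual disk for the installation
qemu-img create -f qcow2 9front-disk.qcow2 8G

# Boot the downloaded 9front installer image with 1 GB of RAM
qemu-system-x86_64 -m 1G \
    -drive file=9front-disk.qcow2,format=qcow2 \
    -cdrom 9front.iso -boot d
```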
-
by: aiparabellum.com Mon, 30 Dec 2024 02:06:14 +0000 BugFree.ai is a cutting-edge platform designed to help professionals and aspiring candidates prepare for system design and behavioral interviews. Much like Leetcode prepares users for technical coding challenges, BugFree.ai focuses on enhancing your skills in system design and behavioral interviews, making it an indispensable tool for anyone aiming to succeed in technical interviews. This platform offers a unique approach by combining guided learning, real-world scenarios, and hands-on practice to ensure users are well-prepared for their next big interview opportunity. Features of BugFree AI Comprehensive System Design Practice: BugFree.ai provides an extensive range of system design problems that mimic real-world scenarios, helping you understand and implement scalable and efficient system architectures. Behavioral Interview Preparation: The platform helps users articulate their experiences, challenges, and achievements while preparing for behavioral interviews, ensuring confidence in presenting your story. Interactive Environment: The platform simulates a real interview environment, allowing users to practice and refine their responses dynamically. Expertly Curated Content: All interview questions and exercises are designed and reviewed by industry experts, ensuring relevance and quality. Progress Tracking: BugFree.ai provides detailed feedback and progress tracking, enabling users to identify their strengths and areas for improvement. Personalized Feedback: The platform offers tailored feedback to help you refine your solutions and responses to both technical and behavioral questions. Mock Interviews: Engage in mock interviews to practice under realistic conditions and receive performance reviews. How It Works Sign Up: Create an account to access the features and resources available on BugFree.ai. Choose Your Path: Select from system design or behavioral interview modules based on your preparation needs. Practice Questions: Start solving system design problems or explore behavioral interview scenarios provided on the platform. Mock Interviews: Participate in mock interviews to simulate real-world interview experiences with expert feedback. Review Feedback and Progress: Review detailed performance feedback after each session to track your improvements over time. Refine and Repeat: Revisit areas of difficulty, refine your approach, and continue practicing until you feel confident. Benefits of BugFree AI Holistic Preparation: BugFree.ai covers both technical and non-technical aspects of interviews, ensuring well-rounded preparation. Industry-Relevant Content: Questions and scenarios are aligned with current industry trends and challenges. Confidence Building: Gain confidence with regular practice, mock interviews, and constructive feedback. Time-Efficient: Focused modules save time by targeting key areas of improvement directly. Career Advancement: Well-prepared candidates stand out in interviews, increasing their chances of landing their dream job. User-Friendly Interface: The platform is intuitive and easy to use, providing a seamless learning experience. Pricing BugFree.ai offers pricing plans tailored to different needs: Free Trial: A limited version to explore the platform and its features. Basic Plan: Ideal for beginners with access to core features. Pro Plan: Includes advanced system design problems, comprehensive behavioral modules, and mock interviews. 
Enterprise Plan: Designed for organizations seeking to train multiple candidates at scale with custom solutions. Specific pricing details are available upon signing up or contacting BugFree.ai. Review BugFree.ai has received positive feedback for its innovative approach to interview preparation. Users appreciate the combination of system design and behavioral modules, which cater to both technical and interpersonal skills. The personalized feedback and mock interview features have been highlighted as particularly useful. However, some users suggest adding more diverse problem sets to further enhance the learning experience. Overall, BugFree.ai is highly recommended for anyone looking to excel in their system design and behavioral interviews. Conclusion BugFree.ai is a comprehensive platform that equips users with the skills and confidence needed to excel in system design and behavioral interviews. Its unique approach, expert-curated content, and personalized feedback make it a valuable resource for job seekers and professionals aiming to advance their careers. With BugFree.ai, you can practice, refine, and succeed in your next big interview. Visit Website The post BugFree appeared first on AI Tools Directory | Browse & Find Best AI Tools.
-
In Bash version 4, associative arrays were introduced, and from that point, they solved my biggest problem with arrays in Bash—indexing. Associative arrays allow you to create key-value pairs, offering a more flexible way to handle data compared to indexed arrays. In simple terms, you can store and retrieve data using string keys, rather than the numeric indices of traditional indexed arrays. But before we begin, make sure you are running bash version 4 or above by checking the bash version:
echo $BASH_VERSION
If you are running bash version 4 or above, you can access the associative array feature.

Using Associative arrays in bash
Before I walk you through the examples of using associative arrays, I would like to mention the key differences between associative and indexed arrays:

Feature            | Indexed Arrays              | Associative Arrays
Index Type         | Numeric (e.g., 0, 1, 2)     | String (e.g., "name", "email")
Declaration Syntax | declare -a array_name       | declare -A array_name
Access Syntax      | ${array_name[index]}        | ${array_name["key"]}
Use Case           | Sequential or numeric data  | Key-value pair data

Now, let's take a look at what you are going to learn in this tutorial on using associative arrays:
- Declaring an associative array
- Assigning values to an array
- Accessing values of an array
- Iterating over an array's elements

1. How to declare an Associative array in bash
To declare an associative array in bash, all you have to do is use the declare command with the -A flag along with the name of the array, as shown here:
declare -A Array_name
For example, if I want to declare an associative array named LHB, then I would use the following command:
declare -A LHB

2. How to add elements to an Associative array
There are two ways you can add elements to an associative array: you can either add elements after declaring the array or add elements while declaring it. I will show you both.

Adding elements after declaring an array
This is quite easy and recommended if you are getting started with bash scripting. In this method, you add elements to the already declared array one by one. To do so, you use the following syntax:
my_array[key1]="value1"
In my case, I assigned two values using two keys to the LHB array:
LHB[name]="Satoshi"
LHB[age]="25"

Adding elements while declaring an array
If you want to add elements while declaring the associative array itself, you can follow this syntax:
declare -A my_array=( [key1]="value1" [key2]="value2" [key3]="value3" )
For example, here I created a new associative array and added three elements:
declare -A myarray=( [Name]="Satoshi" [Age]="25" [email]="satoshi@xyz.com" )

3. Create a read-only Associative array
If you want to create a read-only array (for some reason), you'd have to use the -r flag while creating the array:
declare -rA my_array=( [key1]="value1" [key2]="value2" [key3]="value3" )
Here, I created a read-only associative array named MYarray:
declare -rA MYarray=( [City]="Tokyo" [System]="Ubuntu" [email]="satoshi@xyz.com" )
Now, if I try to add a new element to this array, it will throw an error saying "MYarray: read-only variable".
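To illustrate, here is a quick sketch in place of the original screenshot (the exact wording of the message can vary slightly between bash versions):

```bash
# MYarray was declared read-only above with: declare -rA MYarray=( ... )
MYarray[Country]="Japan"
# bash: MYarray: readonly variable
```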
4. Print keys and values of an Associative array
If you want to print the value of a specific key (similar to printing the value of a specific indexed element), you can simply use the following syntax:
echo ${my_array[key1]}
For example, if I want to print the value of the email key from the myarray array, I would use the following:
echo ${myarray[email]}

Print the value of all keys and elements at once
The method of printing all the keys and elements of an associative array is mostly the same. To print all keys at once, use ${!my_array[@]}, which retrieves all the keys in the associative array:
echo "Keys: ${!my_array[@]}"
If I want to print all the keys of myarray, then I would use the following:
echo "Keys: ${!myarray[@]}"
On the other hand, if you want to print all the values of an associative array, use ${my_array[@]} as shown here:
echo "Values: ${my_array[@]}"
To print the values of myarray, I used the command below:
echo "Values: ${myarray[@]}"

5. Find the Length of the Associative Array
The method for finding the length of an associative array is exactly the same as with indexed arrays. You can use the ${#array_name[@]} syntax to find this count, as shown here:
echo "Length: ${#my_array[@]}"
If I want to find the length of the myarray array, then I would use the following:
echo "Length: ${#myarray[@]}"

6. Iterate over an Associative array
Iterating over an associative array allows you to process each key-value pair. In Bash, you can loop through:
- The keys, using ${!array_name[@]}.
- The corresponding values, using ${array_name[$key]}.
This is useful for tasks like displaying data, modifying values, or performing computations. For example, here I wrote a simple for loop to print the keys and elements accordingly:
for key in "${!myarray[@]}"; do
  echo "Key: $key, Value: ${myarray[$key]}"
done

7. Check if a key exists in the Associative array
Sometimes, you need to verify whether a specific key exists in an associative array. Bash provides the -v operator for this purpose. Here, I wrote a simple if-else snippet that uses the -v flag to check if a key exists in the myarray array:
if [[ -v myarray["username"] ]]; then
  echo "Key 'username' exists"
else
  echo "Key 'username' does not exist"
fi

8. Remove a key from the Associative array
If you want to remove specific keys from the associative array, you can use the unset command along with the key you want to remove:
unset my_array["key1"]
For example, if I want to remove the email key from the myarray array, I would use the following:
unset myarray["email"]

9. Delete the Associative array
If you want to delete the associative array, all you have to do is use the unset command along with the array name, as shown here:
unset my_array
For example, if I want to delete the myarray array, then I would use the following:
unset myarray

Wrapping Up...
In this tutorial, I went through the basics of associative arrays with multiple examples. I hope you find this guide helpful. If you have any questions or suggestions, leave us a comment.
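To tie the snippets above together, here is a short, self-contained sketch you can drop into a file and run; the keys and values are just the sample data used throughout this tutorial:

```bash
#!/bin/bash
# Declare an associative array and seed it with sample data
declare -A myarray=( [Name]="Satoshi" [Age]="25" [email]="satoshi@xyz.com" )

# Add another key-value pair after declaration
myarray[City]="Tokyo"

# Look up a single value and report the array's length
echo "Email: ${myarray[email]}"
echo "Length: ${#myarray[@]}"

# Iterate over every key-value pair
for key in "${!myarray[@]}"; do
  echo "Key: $key, Value: ${myarray[$key]}"
done

# Check whether a key exists before using it
if [[ -v myarray["email"] ]]; then
  echo "Key 'email' exists"
fi

# Remove one key, then delete the whole array
unset myarray["email"]
unset myarray
```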