Everything posted by Blogger

  1. By: Janus Atienza Wed, 06 Aug 2025 18:37:27 +0000

In today’s data-driven economy, access to timely, structured information offers a definitive competitive edge. Whether it’s tracking dynamic pricing in e-commerce, monitoring travel fare shifts, or extracting real-time insights from news and social media, automation lies at the heart of modern intelligence pipelines. A powerful trio—Bash, Cron, and rotating proxies—forms the backbone of a Unix-based data harvesting stack that is clean, scalable, and remarkably cost-efficient.

The Unix Philosophy Meets Modern Data Scraping

Unix systems are designed around a core philosophy: small, composable tools that do one thing well. This minimalist approach remains highly valuable in modern data workflows. Bash, the Bourne Again Shell, enables rapid scripting and automation. Cron schedules these Bash scripts to run at precise intervals, from every minute to monthly cycles. By integrating rotating proxies for anonymity and scale, developers can build a nimble infrastructure capable of harvesting vast volumes of structured data while avoiding detection. This methodology appeals to data operations within cybersecurity, financial modeling, private equity analytics, and market research. According to DataHorizzon Research, the global market for web scraping services reached $508 million in 2022 and is projected to surpass $1.39 billion by 2030—representing a steady compound annual growth rate (CAGR) of 13.3%. The growing demand reflects a marketplace in need of secure, scalable, and automated web data gathering.

Inside a Unix-Native Scraping Pipeline

Consider a market intelligence firm that wants to monitor 500 e-commerce websites for hourly changes in SKU availability, product descriptions, and pricing data. This operation might be built as follows:

- Bash scripts are used to retrieve content via tools like curl or wget.
- The data is then processed using Unix utilities such as awk, jq, and sed before being cleansed and exported to a database.
- Cron jobs schedule these scripts based on business logic: high-demand SKUs might be queried every 15 minutes, while general product listings are updated hourly or nightly.
- Rotating proxies route each request through a dynamic IP pool, preventing server bans and distributing load intelligently across thousands of ephemeral user identities.

A Fortune 500 retailer deploying this configuration reported monitoring over 1.2 million product pages daily. Utilizing more than 20,000 rotating IP addresses via a Cron-managed Bash framework, they saw a 12% efficiency gain in pricing strategy—directly contributing to improved margins.

The Rising Importance of Rotating Proxies

At large scale, preventing IP bans, CAPTCHAs, and rate limits becomes essential. Rotating proxies step in as a frontline defense. Unlike their static counterparts, rotating proxies change IP addresses for each request or session, effectively mimicking human browsing behavior to minimize detection. Research and Markets projects that the global proxy services market will exceed $1.2 billion by 2027. Today’s proxy providers offer smart APIs that integrate easily with curl, Bash, and CI/CD pipelines, enabling seamless programmatic identity masking and data routing. Selecting the best rotating proxy service can significantly improve a scraping system’s reliability and throughput, enhancing both success rates and operational speed. For developers, using top-tier rotating proxies ensures a substantial reduction in error rates and better performance across the board.

Cron: Small Tool, Big Impact

While Bash scripts define the logic, Cron is the execution engine that keeps everything on track. Cron executes jobs with surgical precision, scheduling asynchronous and repetitive tasks down to the minute.
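To make the pipeline concrete, here is a minimal sketch of the retrieval-and-processing step. The proxy gateway URL, target endpoint, and JSON field names are illustrative assumptions, not any specific provider's API:

```shell
#!/usr/bin/env bash
# Sketch of a fetch-and-process step. PROXY_URL and TARGET_URL are
# hypothetical placeholders -- substitute your proxy gateway and target.
set -euo pipefail

PROXY_URL="${PROXY_URL:-http://rotating-gateway.example.com:8000}"
TARGET_URL="${TARGET_URL:-https://shop.example.com/api/products}"

# Fetch one page through the rotating proxy; retry transient failures.
fetch_page() {
  curl --proxy "$PROXY_URL" --fail --silent --show-error \
       --retry 3 --retry-delay 2 "$1"
}

# Reduce a raw JSON payload to CSV rows: sku,price,in_stock.
to_csv() {
  jq -r '.products[] | [.sku, (.price | tostring), (.in_stock | tostring)] | @csv'
}

# Usage (commented out so the sketch has no side effects):
# fetch_page "$TARGET_URL" | to_csv > "data/products_$(date +%Y%m%d%H%M).csv"
```

Because each stage is a separate small tool, the fetch, parse, and export steps can be swapped or tested independently, in keeping with the Unix philosophy described above.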
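The tiered schedules described above can be expressed directly in a crontab, with a small wrapper adding retries and an audit log. The script names, paths, and intervals below are illustrative assumptions, not a prescribed layout:

```shell
#!/usr/bin/env bash
# run_job.sh -- illustrative retry-and-log wrapper, so each Cron entry
# stays a one-liner and every run leaves an audit trail.
set -u

LOG_FILE="${LOG_FILE:-/var/log/scraper/jobs.log}"
RETRY_DELAY="${RETRY_DELAY:-5}"   # seconds between attempts

run_with_retry() {
  local attempts="$1"; shift
  local n
  for (( n = 1; n <= attempts; n++ )); do
    if "$@"; then
      echo "$(date -u +%FT%TZ) OK   $* (attempt $n)" >> "$LOG_FILE"
      return 0
    fi
    if (( n < attempts )); then sleep "$RETRY_DELAY"; fi
  done
  echo "$(date -u +%FT%TZ) FAIL $* after $attempts attempts" >> "$LOG_FILE"
  return 1
}

# Entry point in run_job.sh would be: run_with_retry 3 "$@"
#
# Hypothetical crontab entries (crontab -e) using the wrapper:
#   */15 * * * * /opt/scraper/run_job.sh high_demand_skus.sh   # hot SKUs
#   0 * * * *    /opt/scraper/run_job.sh product_listings.sh   # hourly listings
#   30 2 * * *   /opt/scraper/run_job.sh full_catalog.sh       # nightly sweep
```

Appending every outcome to a log file is what later makes the pipeline auditable: each scheduled run, success or failure, is recorded with a UTC timestamp.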
It also supports monitoring failed jobs, rerunning them when needed, and maintaining logs for audit and compliance purposes. Consider the example of a European fintech firm using Cron, Bash, and rotating proxies to pull real-time data from over 50 global sources, including cryptocurrency pricing and public sentiment signals. With timely, structured data in hand, their machine learning models could deliver faster and smarter pricing recommendations, reducing decision latency by nearly 50%.

Cost-Effective and Transparent Operations

A major advantage of the Unix approach is cost savings. While commercial scraping platforms often carry steep subscription fees, deploying Bash and Cron on cloud services such as Hetzner or DigitalOcean can reduce operational expenses by up to 70%. This cost efficiency scales well when managing multiple data sources and endpoints. Transparency, too, is a strength. Every action—from initial HTTP request to data transformation—is logged, making it easier to debug pipelines, meet compliance requirements, and respect site-specific scraping policies like robots.txt. For industries bound by regulatory oversight, this level of visibility is critical.

Real-World Deployments

Academic Research: A life sciences lab used Cron-triggered shell scripts and proxy caches to scrape over 150,000 peer-reviewed articles monthly—at a cost 70% lower than commercial API alternatives.
Travel Aggregation: A global travel aggregator deployed its data collection fleet on a Kubernetes-based infrastructure using Cron for job scheduling. By aligning scraping frequencies with local flight departure times, they reduced latency by 40%, significantly improving fare tracking accuracy.

Challenges and Growing Complexity

Despite its advantages, the Unix-based data scraping stack has its limitations. Bash scripts struggle with websites that rely heavily on JavaScript rendering.
For such cases, more advanced tools like Puppeteer or Selenium—often built on JavaScript or Python—are required to simulate full browser sessions. As deployments scale across multiple instances or cloud services, managing distributed Cron jobs grows more complex. Efficient error handling, rate limit mitigation, and robust parsing often demand auxiliary Python tools or even natural language processing frameworks like spaCy or Hugging Face’s transformer models.

Unix Scraping in the Modern Age

Modern scraping architectures continue to evolve. Developers are now incorporating cloud-native trigger services like AWS EventBridge, serverless runners, and even AI-assisted data transformations into Unix-based stacks. The appeal is clear: more control over costs, logic, compliance, and execution. Between 2021 and 2023, GitHub repositories tagged with “bash-scripting” increased by 35%, signaling renewed interest in lightweight, script-based data engineering. For engineers familiar with the Unix ecosystem, this resurgence emphasizes the platform’s utility, transparency, and performance.

Conclusion: From Script to Strategy

Real-time access to web data has become mission-critical—from tracking prices to modeling markets. With Bash driving logic, Cron delivering precision, and rotating proxies ensuring anonymity, the Unix-based stack delivers a quiet yet powerful advantage. For organizations that prioritize customization, cost control, and traceability, the Unix way offers more than code—it offers strategic flexibility. Or, as one CTO aptly put it, “We didn’t just scrape the web—we scraped our way into a competitive advantage.”

The post Bash, Cron and Rotating Proxies: Automating Large-Scale Web Data Harvesting the Unix Way appeared first on Unixmen.
  2. by: Blake Lundquist Wed, 06 Aug 2025 13:39:06 +0000

For a period in the 2010s, parallax was a guaranteed way to make your website “cool”. Indeed, Chris Coyier was writing about it as far back as 2008. For those unfamiliar with the concept, parallax is a pattern in which different elements of a webpage move at varying speeds as the user scrolls, creating a three-dimensional, layered appearance. A true parallax effect was once only achievable using JavaScript. However, scroll-driven animations have now given us a CSS-only solution, which is free from the main-thread blocking that can plague JavaScript animations. Parallax may have become a little cliché, but I think it’s worth revisiting with this new CSS feature.

Note: Scroll-driven animations are available on Chrome, Edge, Opera, and Firefox (behind a feature flag) at the time of writing. Use a supported browser when following this tutorial.

Starting code

In this example, we will apply parallax animations to the background and icons within the three “hero” sections of a universe-themed webpage. We’ll start with some lightly styled markup featuring alternating hero and text sections while including some space-related nonsense as placeholder content.

[CodePen demo]

Adding initial animations

Let’s add an animation to the background pattern within each hero section to modify the background position.

@keyframes parallax {
  from { background-position: bottom 0px center; }
  to { background-position: bottom -400px center; }
}

section.hero {
  /* previous code */
+ animation: parallax 3s linear;
}

Here we use the @keyframes rule to create a start and end position for the background. Then we attach this animation to each of our hero sections using the animation property. By default, CSS animations are duration-based and run when the specified selector is loaded in the DOM. If you refresh your browser, you will see the animation running for three seconds as soon as the page loads.
We do not want our animation to be triggered immediately. Instead, we intend to use the page’s scroll position as a reference to calculate the animation’s progress. Scroll-driven animations provide two new animation timeline CSS functions. These additions, view() and scroll(), tell the browser what to reference when calculating the progress of a CSS animation. We will use the view() function later, but for now, let’s focus on scroll(). The scroll progress timeline couples the progression of an animation to the user’s scroll position within a scroll container. Parameters can be included to change the scroll axis and container element, but these are not necessary for our implementation. Let’s use a scroll progress timeline for our animation:

section.hero {
  /* previous code */
- animation: parallax 3s linear;
+ animation: parallax linear;
+ animation-timeline: scroll();
}

If you refresh the page, you will notice that as you scroll down, the position of the background of each hero section also changes. If you scroll back up, the animation reverses. As a bonus, this CSS animation is handled off the main thread and thus is not subject to blocking by any JavaScript that may be running.

Using the view progress timeline

Now let’s add a new parallax layer by animating the header text and icons within each hero section. This way, the background patterns, headers, and main page content will all appear to scroll at different speeds. We will initially use the scroll() CSS function for the animation timeline here as well.

@keyframes float {
  from { top: 25%; }
  to { top: 50%; }
}

.hero-content {
  /* previous code */
+ position: absolute;
+ top: 25%;
+ animation: float linear;
+ animation-timeline: scroll();
}

That’s not quite right. The animation for the sections further down the page is nearly done by the time they come into view. Luckily, the view animation timeline solves this problem.
By setting the animation-timeline property to view(), our animation progresses based on the position of the subject within the scrollport, which is the part of the container that is visible when scrolling. Like the scroll animation timeline, scrolling in reverse will also reverse the animation. Let’s try changing our animation timeline property for the hero text:

.hero-content {
  /* previous code */
- animation-timeline: scroll();
+ animation-timeline: view();
}

That looks pretty good, but there is a problem with the header content flashing into view when scrolling back up the document. This is because the view timeline is calculated based on the original, pre-animation positioning of the subject element. We can solve this by adding an inset parameter to the view() function. This adjusts the size of the container in which the animation will take place. According to MDN’s documentation, the “inset is used to determine whether the element is in view which determines the length of the animation timeline. In other words, the animation lasts as long as the element is in the inset-adjusted view.” So, by using a negative value, we make the container larger than the window and trigger the animation to start a little before and end a little after the subject is visible. This accounts for the fact that the subject moves during the animation.

- animation-timeline: view();
+ animation-timeline: view(-100px);

Now both the text and background animate smoothly at different speeds.

[CodePen demo]

Adjusting animations using animation ranges

So far, we have employed both scroll and view progress timelines. Let’s look at another way to adjust the start and end timing of the animations using the animation-range property. It can be used to modify where along the timeline the animation will start and end.
We’ll start by adding a view() timeline animation to the #spaceship emoji:

@keyframes launch {
  from { transform: translate(-100px, 200px); }
  to { transform: translate(100px, -100px); }
}

#spaceship {
  animation: launch;
  animation-timeline: view();
}

Again, we see the emoji returning to its 0% position once its original unanimated position is outside of the scrollport. As discussed before, animations are based on the original pre-animation position of the subject. Previously, we solved this by adding an inset parameter to the view() function. We can also adjust the animation range and tell our animation to continue beyond 100% of the animation timeline without having to manipulate the inset of the view timeline itself.

#spaceship {
  animation: launch;
  animation-timeline: view();
+ animation-range: 0% 120%;
}

Now the animation continues until we have scrolled an extra 20% beyond the calculated scroll timeline’s normal endpoint. Let’s say that we want to add an animation to the #comet emoji, but we don’t want it to start animating until it has passed 4rem from the bottom of the scrollport:

@keyframes rotate {
  from { transform: rotate(0deg) translateX(100px); }
  to { transform: rotate(-70deg) translateX(0px); }
}

#comet {
  animation: rotate linear;
  transform-origin: center 125px;
  animation-timeline: view();
  animation-range: 4rem 120%;
}

Here we see the “delayed” animation in action. We can also combine animation ranges to run completely different animations at different points within the same timeline! Let’s illustrate this by combining animation ranges for the #satellite icon at the top of the page. The result is that the first animation runs until the icon passes 80% of the scrollport, then the second animation takes over for the final 20%.
@keyframes orbit-in {
  0% { transform: rotate(200deg); }
  100% { transform: rotate(0deg); }
}

@keyframes orbit-out {
  0% { transform: translate(0px, 0px); }
  100% { transform: translate(-50px, -15px); }
}

#satellite {
  animation: orbit-in linear, orbit-out ease;
  animation-timeline: view();
  animation-range: 0% 80%, 80% 110%;
}

Fallbacks and accessibility

Our webpage features numerous moving elements that may cause discomfort for some users. Let’s consider accessibility for motion sensitivities and incorporate the prefers-reduced-motion CSS media feature. There are two possible values: no-preference and reduce. If we want to fine-tune the webpage with animations disabled by default and then enhance each selector with animations and associated styles, we can use no-preference to enable them.

@media (prefers-reduced-motion: no-preference) {
  .my-selector {
    position: relative;
    top: 25%;
    animation: cool-animation linear;
    animation-timeline: scroll();
  }
}

For us, however, the webpage content and images will still all be visible if we disable all animations simultaneously. This can be done concisely using the reduce option. It’s important to note that this sort of blanket approach works for our situation, but you should always consider the impact on your specific users when implementing accessibility features.

@media (prefers-reduced-motion: reduce) {
  .my-selector {
    animation: none !important;
  }
}

In addition to considering accessibility, we should also take into account that scroll-driven animations are not supported by all browsers at the time of writing. If we care a lot about users seeing our animations, we can add a polyfill (direct link) to extend this functionality to currently unsupported browsers. This, however, will force the animation to run on the main thread. Alternatively, we could decide that performance is important enough to skip the animations on unsupported browsers, thereby keeping the main thread clear.
In this case, we can use the @supports at-rule and include the styles only on supported browsers. Here is the final code with everything, including the polyfill and reduced motion fallback:

[CodePen demo]

Conclusion

There we go, we just re-created a classic web effect with scroll-driven animations using scroll and view progress timelines. We also discussed some of the parameters that can be used to adjust animation behavior. Whether or not parallax is your thing, I like the idea that we can use a modern approach that is capable of doing what we could before… only better, with a dash of progressive enhancement.

More information

Unleash the Power of Scroll-Driven Animations
animation-timeline (CSS-Tricks)
CSS scroll-driven animations (MDN)
Scroll-driven Animations Demo Site (Bramus)

Bringing Back Parallax With Scroll-Driven CSS Animations originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  3. by: Abhishek Prakash Tue, 05 Aug 2025 15:18:21 GMT

I am helping my child learn coding. No, not the newborn, but “my favorite child”, Ushika, who is now four years old. I have to call her “my favorite child” to avoid any backlash we could get as we divide our attention between two children now. My daughter has had a dedicated Raspberry Pi setup for almost a year now. She recognizes the typical Raspberry Pi wallpaper, and whenever I have a Pi connected to a monitor, she thinks it’s her computer. After her initial introduction to computers, I am gradually helping her explore coding programs specially designed for children as young as five years old.

💡 Please know that coding for young children is vastly different from the usual programming you know. It is block-based and interactive. It’s like playing with Lego, but on computers. Also, I am not forcing her to do anything; I am just encouraging her to explore computers. And yes, teaching coding to kids is the new normal.

TL;DR:
- I set up a Raspberry Pi with GCompris, and that helped her get familiar with the mouse and keyboard and the idea of a proper computer.
- The CodeMonkey online platform helped her get into block-based coding on her computer as well as her tablet. It is the simplest choice for parents with or without coding knowledge.
- Microbit combines the element of block-based software programming with hardware. I think prior block-based coding knowledge helped here. It needs some hardware interaction, and not everyone would be comfortable with it.

The next step is to get more interactive with a Raspberry Pi kit and move on to higher-level block-based courses on CodeMonkey.

GCompris: Free educational games package

I started Ushika’s computing journey with a Raspberry Pi running GCompris. GCompris is a free and open‑source educational software suite created as part of the GNU project, designed for children aged 2 to 10. It provides more than 120 interactive and game‑based activities to help kids develop foundational skills in a fun environment.
But GCompris is not really for coding. Its main purpose is to get young children comfortable with the idea of using a computer. Ushika learned mouse control and typing on the keyboard with these games. Additionally, puzzle games that are enjoyable can assist in the development of logical reasoning. GCompris is cross-platform, and you can use it on Linux, Windows, macOS, and Android.

GCompris

📋 If you have a computer, use GCompris to make your children familiar with the mouse and keyboard.

CodeMonkey: Block-based coding

It didn’t take Ushika long to explore GCompris completely, at least the games that are of her level or interest. Since she could use the mouse and keyboard effortlessly now, I wanted her to level up her skills. The next move was to get her into coding. I looked at several online coding platforms for kids and decided to try CodeMonkey. It was a good decision. CodeMonkey is a web-based coding platform specifically designed for children aged 5 and above. In fact, their courses offer a full curriculum for grades K-8 (kindergarten to class 8). There are two types of coding courses:

- Block-based: Suitable for young children who cannot read and write properly
- Text-based: Courses based on the CoffeeScript and Python programming languages

Later stages also have advanced courses where children can build their own AI chatbots and games. My 4-year-old has not reached that stage yet (or so I think). I used the CodeMonkey Junior program as a parent on the desktop. I wanted Ushika to use a mouse instead of a touchscreen. Also, the mentality is that the tablet is for fun and watching cartoons, and it can easily be distracting. Perhaps that’s one of the reasons why CodeMonkey doesn’t have mobile apps.

CodeMonkey can also be used by teachers in schools.

I sit with Ushika in all the ‘coding sessions’. While the puzzles are easy, the young ones still need some guidance from a grownup. If she were 7 or 8, things would have been easier for her to go on her own, as there are video solutions to help students out.
CodeMonkey Junior teaches sequencing, loops, conditional loops, and procedures. Here’s a sample video of a lesson on loops:

[Video: Game teaching the concept of loops to children]

The next beginner’s course is Beaver Achiever, and that has a different kind of block coding. This type of block coding got popular thanks to the Scratch project, which I find overwhelming, as things are not streamlined for focused learning there. I do use it differently, and I’ll discuss that in the next section. Here’s the thing with young children: they learn with repetition, and they actually (often) love to repeat. Ushika likes repeating the sequence and loop courses. She took her time the first time, and now it’s just 15-20 minutes or so to complete the 10-12 chapters. By the way, there are also monthly challenges in CodeMonkey that justify the continued subscription.

Explore CodeMonkey

📋 If you don’t mind spending a little, CodeMonkey is an excellent platform to start the coding journey with interactive courses. Cutting down on a streaming service for your child’s education is not a bad deal. It is also a good way to spend some creative time with your young child(ren).

Microbit: Combine software with hardware

I’ll be honest that I was not aware of the awesomeness of this tiny gadget. For me, things started with Raspberry Pi, and that’s not bad. But Microbit is a simpler device that bridges the gap between software and hardware. It is a 2-inch device with a few physical push buttons, a speaker, LEDs for information display, and a touch sensor. Remember I discussed Scratch in the previous section? Microbit with Scratch allows kids to write simple programs that are reflected on the Microbit device. For example, Ushika worked on a block-based program that displays Mumma when the left button is pushed and Papa when the right button is pushed, and if both buttons are pushed together, it flashes Ushika. This was one of her proudest achievements before she lost her Microbit.
Now, flashing the Scratch program to the Microbit should be a matter of clicks, but I find it finicky, at least on Raspberry Pi and Linux. It sometimes works, while at other times the Microbit is not even recognized. Still, this is a nifty gadget to try if you are familiar with Raspberry Pi and other single-board computers. For absolutely non-technical parents, this could be a little tricky.

Explore Microbit

📋 If you can spend $20 and are not afraid of a bare electronic device, Microbit is worth a try.

Raspberry Pi kits: The next level

There are numerous Raspberry Pi kits out there. These kits provide a compact box with a number of built-in accessories like fans, RGB lights, sensors, etc. This way, children can write and run programs that directly interact with hardware. Basically, a more extended version of the Microbit. I have two such kits from Elecrow that were sent to me for review. The Crow Pi 3 kit is more suitable for older kids. The smaller Pico Kit is more suitable for quick experiments. However, it needs the Arduino IDE, and you have to download individual programs and flash them to the Raspberry Pi Pico. This could be challenging for parents who are not so technically involved. These kits require more involvement from parents, especially for younger kids. However, older kids should be able to follow along on their own, in my opinion. This is the next step for Ushika at this stage. I am figuring out ways she could use these ‘advanced’ kits on her own, but it is going to take some time and I am not rushing.

Explore Elecrow Pico Kit

Kids love challenges, give them one

Ushika likes interacting with these “games”. Yes, it is all a game for her, something different and fun to do. And that should be the spirit with anything children learn. They should enjoy the learning process. These programming sessions are one of the several activities she likes to do with me.
Playing on the PlayStation console, making castles with magnetic blocks, riding bicycles, and playing badminton are some other fun ways we spend time together. Microbit and Raspberry Pi kits can be challenging for parents, too. If you have young children at home, you should try GCompris, and if you can, opt for a platform like CodeMonkey. This way, you let them take their first step towards computing and coding. And that is my experience and experiment so far. If you have your own experience and views on the topic, please share them in the comments.
  4. by: Ani Tue, 05 Aug 2025 09:18:37 +0000

The goal is to leave work each day a little better than when you walked in. Improvement does not have to come from big courses or certifications. It can come from a single good conversation with a colleague, or from reflecting on something you struggled with that day.

About me

I am Maaret Pyhäjärvi, a tester, a programmer, and currently the Director of Testing Services at CGI. My role is all about ensuring that we deliver high-quality testing that truly supports our customers’ needs. I have been in the industry for 28 years, and my path into testing was anything but planned. I originally studied both the Greek language and computer science, and someone saw an unusual opportunity in that combination—testing the Greek version of Microsoft Office. That unexpected match launched my career.

Working at CGI

I absolutely love my job at CGI. I joined because I wanted to contribute to meaningful, locally relevant software in Finland, and CGI delivers on that. The people are welcoming, the work versatile, and even in remote settings, the support network is strong and well-established.

My study path in the IT field

When I finished high school, I was not motivated by computers at all. I actually planned to become a chemical engineer. But that dream quickly changed after doctors warned me that my allergies could make it a life-threatening career choice. I had to rethink my future. By pure chance, I ended up applying to study mechanical engineering. A friend needed a ride to the technical university to register for the entrance exams, and since I had a car, I gave her a ride—and decided to take the exam myself. Surprisingly, I got in. I gave it a shot, took some courses, but soon realized that mechanical engineering just did not spark my interest. What did inspire me, though, was the university itself. The social atmosphere, the sense of community—it was amazing. I started exploring other fields and became intrigued by computer science.
That curiosity grew, and eventually, I decided to pursue a degree in Computer Science. I ended up taking a course in Assembly language—an incredibly low-level form of programming. Surprisingly, I found it fun, and this is what I wanted to study. I absolutely loved the idea of moving some blocks with a set of commands, and then building languages on top of that. From Assembly, my studies guided me into Lisp, a tricky language, but something about its complexity really drew me in. So it was never about a particular technology or a particular language; it was about all of them. What excited me was learning how all the pieces fit together, and more importantly, where they could break. Testing meant diving deep into systems, finding the blind spots and the vulnerabilities. Others might know how the technology works, but as a tester, you learn how it fails, and that’s where the real insight lies.

A typical day at my job

Honestly, no two days are ever the same. I don’t really have a “typical” day; each one brings something new. Take today, for example: I’ll be spending the rest of it writing a forty-page document that outlines our most modern testing approach—capturing how we’re evolving and pushing things forward.

The secret to staying up to date in IT

I learn from a mix of sources, and I like to blend them. I actually google much less these days. Instead, I have conversations with ChatGPT. AI is a better conversational resource in many ways. When I use ChatGPT, it’s not just about getting the “right” answers; rather, I’m looking for an external imagination that stretches what’s already in me. I use it to stretch my own thinking. As a tester, I’m not just seeking confirmations. I’m especially interested in ideas that challenge what I believe or have experienced. These help me make informed, intentional decisions. I stay in control, but AI acts as an external imagination that feeds my own. Another major learning source for me is social software engineering.
I discovered it about a decade ago, and it was a game-changer. Practices like ensemble programming and ensemble testing—where a whole group works together at one computer—gave me access to something incredibly valuable: the “unknown unknowns.” Another key way I keep learning is through conferences. I attend a lot of them—not just as a listener, but as a speaker too. So far, I’ve delivered around 570 talks in my career, and I’ve probably listened to over 5,700 sessions. I absorb pieces and ideas, and then I have conversations about those. That’s when learning really happens for me. Speaking at conferences has also become a major source of growth. After presenting, people often come up to me to talk about what I shared. These moments spark new ideas and connections. And sometimes, when I ask a question during a session, someone in the room already knows the answer and is eager to share it. That kind of instant learning is powerful, far faster and richer than anything I could do alone.

Overcoming the fear of public speaking—570 talks later

What might surprise people is that I did not start as a confident speaker. I used to be terrified of public speaking. I would faint—or feel like I might—just introducing myself. But I realized something important: this fear was a limitation I could not live with. I did not want it to define me. So I made a decision—I would work through it. It wasn’t a sudden transformation. I practiced. I persisted. I kept showing up, even when it was uncomfortable. And over time, something changed. With every talk I gave, I built confidence. I reflected, improved, and most importantly, I kept going. Eventually, what once felt like a crippling fear became one of my greatest strengths.

Maaret Pyhäjärvi, Director of Testing Services, CGI

How did I convince myself to overcome my fear?

I owe a lot to my mother. She taught me one of the most important life lessons: whatever you struggle with, you can improve. It’s not magic—it’s persistence.
That stuck with me. She taught me that the trick is to face what makes you uncomfortable and keep going until it doesn’t feel uncomfortable anymore. Overcoming my fear of public speaking was a matter of doing the hard thing over and over again until it became natural. You do not need to master everything at once. You just need to start stretching in the right direction. This mindset has helped me at work in a very practical way. In tech, we are constantly surrounded by new tools, languages, and frameworks. It’s impossible to know everything upfront. But I have learned to say: “I do not know this yet, but I know how to learn.” I do not wait to master something before I use it. If I understand even half of it, I will start applying what I can.

The power of continuous learning
This approach to learning, little by little, every single day, piles up. It’s like a river made from countless small streams. Each moment of growth may feel small, but over time, it builds something powerful. I often say, “The goal is to leave work each day a little better than when you walked in.” Improvement does not have to come from big courses or certifications. It can come from a single good conversation with a colleague, or from reflecting on something you struggled with that day.

The best tip I can give
If I had just one piece of advice to offer, it would be this: start speaking your mind. The thoughts and ideas inside your head? No one will ever know them unless you choose to share them. It does not have to be perfect. Just start. You can begin by writing. Blogging is a powerful tool. I’ve been doing it for years, and I recently checked: my blog has reached 1.2 million views. Another powerful way to share your voice is public speaking. The size of the audience does not matter at first; just begin. Over time, those audiences grow. And more importantly, they talk back. They respond, they question, they share their own insights. 
That interaction, that conversation: that’s how real growth happens. Speaking your mind creates collaboration.

How to improve oneself daily?
It’s simple, really: you recognize something difficult. Then you do it. You only learn to do things you don’t know by doing them.

My favorite TED talk and quote
My favorite TED talk is “Where are the baby dinosaurs?” by Jack Horner, and the quote that resonates with me is “Don’t ask forgiveness, radiate intent” by Elizabeth Ayer. The post Role Model Blog: Maaret Pyhäjärvi, CGI first appeared on Women in Tech Finland.
  5. by: Sourav Rudra Tue, 05 Aug 2025 06:12:10 GMT As people who live in the information age, we often have to give up our privacy to use collaboration tools that have no business having access to such invasive levels of PII, all in the name of "serving relevant ads". Yep, I am talking about the offerings from Big Tech. They are notoriously apathetic towards their user base when it comes to harvesting user data, not even leaving family photos out of their data-hungry AI models. That is where privacy-first tools like Nextcloud and CryptPad come in, offering users an experience where their data is not traded, allowing them to retain ownership over their data while limiting exposure to Big Tech's tracking-driven ecosystems. If you find it confusing to select between the two, then allow me to break it down for you in this article.

Collaboration Features
Nextcloud and CryptPad real-time collaboration on a document. Both Nextcloud and CryptPad support real-time collaboration on documents, enabling multiple users to edit the same file simultaneously. While this is a common feature for both platforms, the surrounding ecosystem of collaboration tools varies. Nextcloud has a more expansive suite of collaboration tools that are tailored for team workflows, including tools like Nextcloud Talk for secure audio/video calls and Nextcloud Groupware, which bundles calendars, contacts, mail, and task management into a single hub. In comparison, CryptPad takes a more streamlined approach, focusing on privacy-first collaboration, with apps like Whiteboard for visual brainstorming, Kanban boards for project management, and an office suite for spreadsheets, presentations, and code edits.

Security & Encryption
The encryption features on Nextcloud and CryptPad are remarkable. By default, Nextcloud uses TLS encryption during the transfer of data to secure communications between the client and server. 
Once stored, the data is protected by server-side encryption (if enabled), which encrypts files at rest on the server using server-managed keys. There is also the option to enable end-to-end encryption (E2EE) for specific folders, which encrypts content on the client side before it reaches the server, ensuring that only the owner and intended recipients can access the data. CryptPad, on the other hand, has a more focused approach when it comes to handling security. It enforces E2EE by default, meaning all data is encrypted directly in your browser before being sent to the server. This zero-knowledge design ensures that even the service provider (e.g., the cloud host) cannot access your content. Plus, I noticed that signing up for a CryptPad account (for the Registered tier) doesn’t require you to provide an email address, allowing quick anonymous access. In contrast, Nextcloud’s cloud hosting options ask for an email during registration.

File Storage Capabilities
Cloud drive capabilities of Nextcloud and CryptPad. This is where Nextcloud excels by offering a full-fledged cloud storage solution that rivals the Big Tech platforms. It provides file syncing, versioning, sharing controls, and integration with external storage services, making it a strong alternative for users who want ownership over their data. Notably, Nextcloud made headlines recently when Google blocked key features in its Android app, showing just how apathetic Big Tech can be. Thankfully, the backlash was loud enough that Google backed down. Compared to that, CryptPad looks like a modest offering, primarily focused on storing and working with documents created within its office apps. While it does support uploading other file types, its functionality is best suited for secure, collaborative document editing rather than general-purpose file storage.

Cloud Hosting or Self-Hosting
The cloud hosting pricing charts for Nextcloud and CryptPad. Both platforms offer flexible deployment options. 
Nextcloud provides three enterprise tiers: Standard, Premium, and Ultimate, with pricing ranging from €67.89 to €195 per user per year. These plans cater to organizations of varying sizes and needs, offering benefits like extended support and advanced security features. For CryptPad, the pricing structure is much simpler and more affordable, especially for individual users. There's Guest, which costs €0 and provides access to core features but without file uploads. Then there's Registered, which is also free and offers the same core features as Guest but adds file uploads and 1 GB of free cloud storage into the mix. Finally, there's Premium, which requires an email address for registration and costs between €5 and €15 per month. This plan unlocks additional storage, customer support, and directly supports the financial sustainability of the CryptPad project. Self-hosting is doable for both Nextcloud and CryptPad, giving you full control over your data and cloud infrastructure. For Nextcloud, you can go through the documentation for home and server deployments to see which fits your use case. Similarly, for CryptPad, you can refer to the official documentation to get started.

Closing Thoughts
In the end, the final decision rests with you, the user. If your priority is a no-nonsense, easy-to-access, encrypted collaboration tool, then CryptPad is the clear winner here with its zero-knowledge architecture, decent real-time collaboration, and anonymous access. On the other hand, if you're looking for a scalable platform that can handle enterprise workflows, file storage, and team communication under one roof, then Nextcloud is the way to go.
  6. By: Janus Atienza Mon, 04 Aug 2025 17:09:44 +0000 In today’s interconnected world, businesses are increasingly reliant on digital systems to operate efficiently. However, with this reliance comes an ever-growing threat: cyberattacks. One of the most common and potentially devastating forms of these attacks is account hacking. Recovering from a hacked business account can be complex and costly, making prevention the best defense. Let’s explore how businesses can protect their accounts and the role technologies like network diodes play in securing sensitive information. Signs Your Business Account May Be Compromised Before diving into prevention, it’s crucial to recognize the warning signs of a hacked account. These include: Unusual activity: Logins from unfamiliar locations or devices. Unintended actions: Emails sent without authorization or unexpected transactions. Locked accounts: Sudden inability to access critical accounts. Security alerts: Notifications about password changes or unauthorized access. If you notice these signs, immediate action is necessary to mitigate damage. Steps to Recover a Hacked Business Account Act Immediately: Disconnect the compromised device from the network to prevent further data theft. Reset Passwords: Use a secure device to change passwords for all affected accounts. Employ strong, unique passwords. Enable Two-Factor Authentication (2FA): Add an extra layer of protection to ensure that even if login credentials are stolen, the attacker cannot access the account. Investigate the Breach: Identify how the attack occurred and what data was compromised to prevent recurrence. Notify Stakeholders: Inform clients, partners, and employees about the breach if their data might be affected. Transparency helps maintain trust. Report the Incident: File a report with the appropriate authorities or regulatory bodies if sensitive data was exposed. 
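The “strong, unique passwords” step above can be made concrete in a few lines; a minimal sketch using Python’s secrets module (the length and alphabet here are my choices for illustration, not a recommendation from the article):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    # Draw each character with a cryptographically secure RNG; mixing
    # letters, digits, and punctuation keeps the search space large.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Generate one fresh password per affected account; never reuse across services.
print(generate_password())
```

Pair a generator like this with a password manager so each reset in step 2 actually produces a unique credential.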
Prevention: The Best Defense Recovering from a breach is resource-intensive, often resulting in financial losses and reputational damage. Implementing preventive measures is far more effective. Here’s how businesses can proactively secure their accounts: Employing Network Diodes Network diodes are powerful tools for ensuring data security. These devices enforce one-way data transfer, making it impossible for hackers to send malicious data back into the network. Here’s how they work: Unidirectional Communication: Data flows in one direction only, from the secure side to the external network. Isolation of Critical Systems: Protects sensitive assets from being accessed remotely by attackers. Applications in Business: Ideal for transmitting financial data, sensitive communications, or proprietary information. By integrating network diodes, businesses can safeguard their information while maintaining operational efficiency. Regular Security Audits Conduct routine evaluations of your cybersecurity protocols to identify vulnerabilities. Educate Employees Human error remains one of the weakest links in cybersecurity. Regular training on phishing, password management, and secure browsing is essential. Leverage Advanced Authentication Methods Beyond 2FA, consider biometrics or hardware security keys to add further protection. Backup Critical Data Regularly back up data to an isolated location. In the event of an attack, this ensures continuity without yielding to ransom demands. The Role of Cybersecurity Policies A well-drafted cybersecurity policy is the cornerstone of preventive efforts. Ensure that policies include: Access Controls: Limit access to sensitive systems and data based on roles. Incident Response Plans: Define clear steps for mitigating breaches. Software Updates: Mandate timely updates to close security loopholes. Truths and Myths About Linux Machines: How Reliable Are Linux Systems, and Are They Vulnerable? 
Truth: While Linux has a reputation for being secure, it is not immune. Many attacks target misconfigured services, weak SSH credentials, outdated software, and publicly exposed daemons (e.g., Apache, MySQL, OpenSSH). Example: Attackers brute-force SSH logins or exploit outdated WordPress installations. Compromised Business Accounts Are High-Value Targets Especially on Linux servers, root or sudo-capable users are a jackpot. Attackers can: Install cryptominers, backdoors, or keyloggers. Harvest SSH keys, database access, or API credentials. Laterally move to other systems via shared keys or VPN credentials. Logs Can Be Modified or Deleted A common myth is that “you can always find the trail in /var/log.” Truth: Skilled attackers clear or modify logs, or use rootkits to hide traces. Rootkits and Kernel-Level Malware Exist Tools like LKM rootkits can make malicious processes, files, or ports invisible. Hard to detect with normal tools (ps, netstat, ls). Business Email Compromise (BEC) Often Starts from a Linux Box Especially if that box hosts email servers, webmail portals, or SMTP relays. Myths “Only Windows Gets Viruses” False: Linux malware exists and is growing. Examples include: Mirai, Gafgyt, Xor.DDoS (IoT/Linux botnets). EvilGnome, Turla, HiddenWasp (targeted malware). “My Server Has No GUI, So It’s Safe” False: Headless servers are often more vulnerable because they: Are exposed to the internet. May run outdated command-line-only services (e.g., nginx, Postfix). “SELinux/AppArmor Guarantees Safety” False: These tools reduce damage, but are often: Disabled, misconfigured, or bypassed via privilege escalation. “Strong Passwords Are Enough” False: Even with strong passwords, if SSH keys, sudo privileges, or known vulnerabilities (e.g., sudoedit, dirty pipe) are present — you’re still at risk. “If There’s No Traffic Spike, Everything’s Fine” False: Many modern attacks are stealthy: Use low-and-slow exfiltration. Set up reverse shells to await commands. 
Operate during off-hours. Conclusion While recovering from a hacked business account is possible, the time, money, and resources spent on remediation underscore the value of prevention. Technologies like network diodes, combined with robust security policies and employee education, can significantly reduce the risk of cyberattacks. By taking proactive steps today, businesses can secure their digital assets and focus on what matters most: growth and innovation. Taking preventive measures is not just a smart choice—it’s a necessary one in today’s digital landscape. Don’t wait until it’s too late; prioritize cybersecurity now. The post Recovering Business Accounts That Have Been Hacked: Best Approach Is Prevention appeared first on Unixmen.
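The rootkit claim above (that processes can be hidden from ps yet still exist) can be cross-checked by comparing two views of the process table; a minimal sketch in Python, assuming a Linux /proc filesystem (it degrades to an empty result elsewhere), and not a substitute for a real rootkit scanner:

```python
import os
import subprocess

def proc_pids():
    # PIDs visible through the /proc filesystem (Linux-specific).
    if not os.path.isdir("/proc"):
        return set()
    return {int(d) for d in os.listdir("/proc") if d.isdigit()}

def ps_pids():
    # PIDs visible through the ps tool.
    try:
        out = subprocess.run(["ps", "-e", "-o", "pid="],
                             capture_output=True, text=True).stdout
    except FileNotFoundError:  # ps not installed
        return set()
    return {int(p) for p in out.split()}

def suspicious_pids():
    # A userland rootkit that hooks ps can hide entries here; note that
    # short-lived processes may also appear briefly (benign noise).
    return proc_pids() - ps_pids()

if __name__ == "__main__":
    print(sorted(suspicious_pids()))
```

A kernel-level (LKM) rootkit can hide from /proc as well, which is why this cross-view check catches only the simpler userland cases.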
  7. by: Chris Coyier Mon, 04 Aug 2025 17:00:40 +0000 Scroll-Driven Animations are a bit closer to usable now that Safari has them in Technical Preview and Firefox has them behind a flag. Chrome has already released them. Saron Yitbarek has been blogging about it for Apple, and it’s nice to see. Apple hasn’t ever been super big in the “we make educational content for web development in general” department, but maybe that’s changing. I like how Saron lays scroll-driven animations out. What I like about this framing is that it underscores that the target and the timeline can be totally different elements. They also have no particular requirements for where they live in the DOM. A rando <div> that scrolls can be assigned a custom timeline name that some other totally rando <section> elsewhere animates to. It’s just a freeing thought. There is also this important thing to know about scroll-driven animations: there are two kinds. One of them is animation-timeline: scroll(), where the timeline is simply how far the element has scrolled, and the other is animation-timeline: view(), where the timeline is all about the visibility of the element in the viewport (browser window). The latter is maybe a little more fun and interesting in general, for fancy designs. And it’s got a bit more you need to learn about, like animation-range. Again, Saron Yitbarek has a new article out about it: A cheatsheet of animation-ranges for your next scroll-driven animation. There are a bunch of keywords for it, thankfully, and Saron explains how they work. Me, I finally got a chance to look at Bramus’ If View Transitions and Scroll-Driven Animations had a baby, which is a bit of a mind-blowing concept to me. They feel like very different ideas, but, then again, a View Transition is essentially a way to define an animation you want to happen, and scroll-driven animations could be the way the view transition progresses. It’s still weird: like how could a scroll position affect the state of the DOM?? 
You should probably watch Bramus explain in the video/slides linked above, but I couldn’t resist poking at it myself on a recent stream. Are you a little leery of using scroll-driven animations because of the browser support? Well, Cyd Stumpel has some things to say about that in Two approaches to fallback CSS scroll driven animations. They aren’t in Interop 2025 or anything, but with all three major engines having at least flagged implementations, it shouldn’t be too long. But there will always be a long tail of people on browsers that don’t support them, so what then? Cyd nicely explains two approaches: progressive enhancement and graceful degradation. The difference between these concepts is often hard to explain, so this is a great example. Those two links just now? Pens that show off the concepts perfectly. I’ll leave you with Amit Sheen going buck wild on sticky headers that do fun things with scroll-driven animations, like the text in the header typing itself out (and then back).
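The two kinds of timelines from the article, plus the @supports-based progressive enhancement approach, can be sketched in a few lines of CSS (class and keyframe names here are mine, purely illustrative):

```css
/* Kind 1: scroll() ties progress to how far the scroller has scrolled */
.progress-bar {
  animation: grow linear;
  animation-timeline: scroll();
}

/* Kind 2: view() ties progress to the element's visibility in the scrollport.
   Guarded with @supports so non-supporting browsers simply show the element. */
@supports (animation-timeline: view()) {
  .card {
    animation: fade-in linear both;
    animation-timeline: view();
    animation-range: entry 0% entry 100%; /* animate only while entering */
  }
}

@keyframes grow    { from { transform: scaleX(0); } to { transform: scaleX(1); } }
@keyframes fade-in { from { opacity: 0; } to { opacity: 1; } }
```

The guard is the progressive-enhancement route: without it, a browser that knows `animation` but not `animation-timeline` would still apply the keyframes as a regular time-based animation.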
  9. by: Zell Liew Mon, 04 Aug 2025 13:14:45 +0000 As a front-end developer, I’ve been pretty curious about how other people code up their websites. So I tend to poke my head into design systems whenever I find one. Then, late last year, a conversation with Geoff Graham set me off thinking even deeper about theming websites. (If you don’t know that dude, he’s the chief editor of this site, so that guy’s a pretty big deal, too.) So I’ve been watching, pondering, and exploring: How can we create better themes? How can we allow increased flexibility in theming? How can we allow more colors to be used so that sites can be more alive and dimensional instead of being so flat all the time? Today, I want to discuss a couple of patterns that the community is using, and how I propose we can improve them, so we achieve both flexibility and beauty. Hope you’re ready to go on a wild ride with me!

Color Palettes
Let’s begin from the beginning. After all, how can you discuss theming without including colors into the site? I think this problem is pretty much solved by now. Everyone has adopted systems that allow for various hues — along with multiple tints and shades — that can give some life to the design. We don’t need to go very far to see this trend happening. For example, Tailwind CSS includes a ton of colors and their respective tones. Open Props by Adam Argyle provides even more tones, up to 13 per color. And Pico CSS ups the ante by introducing 19 different tones per color. Er… this is not a race, so the number of tones doesn’t really matter. What’s important is you get sufficient color tones to create subtle effects for various parts of the design.

Designing Your Own Palette
Color palettes provided by these libraries and frameworks can be good starting points, but I’d argue that you almost never want to use them. Why? Because colors create differentiation; differentiation creates distinction; distinction creates identity. You probably know this is true without me telling you. 
Sites that use Bootstrap look like Bootstrap. Sites that use Tailwind look like Tailwind. Sites that use shadcn look like that too… Of course there are makers who can break the mould, use Tailwind, and make it look completely not like Tailwind. But that’s because they tweak many things. Color is one of these things — one of the most important ones — but other important pieces include typography, spacing, the roundness of your corners… and many others. Covering those is a story for another day, and perhaps best covered by Jack McDade, who teaches Radical Design. So, if you don’t wanna drown in the sea of sameness — looking like everyone else — creating your own color palettes is a first step forward. Now, you may be anxious about creating color palettes because there’s been lots of writing about the amount of work that goes into creating accessible color palettes, so that might sound like a daunting task. Plus, anything related to accessibility carries “Big Potential Consequences” and is “Highly Shameful When Done Incorrectly,” so that can add extra pressure on you. Throw all those pressures away. Don’t be scared. Because if you want to create a corner of the internet that looks like you (or your company), breathes like you, acts like you, and exudes fun like you do, then you gotta do what you gotta do. There are only two words you have to remember. Just two words. Sufficient contrast. And you’re set for accessibility (design-wise, at least). That’s it.

Designing Color Palettes by Hand
I tend to design color palettes by hand — in Figma — when I design my websites. This seems to be the most natural process for me. (Or maybe I’m just influenced by how Jack designs stuff 🙃). If you do this, there’s no need to stress about filling up tones from 50 to 950. That’s because you’ll have no idea what colors would look nice before you fit them into the design. Stressing over tones is putting the cart before the horse. Here’s a decent example. 
When I designed Splendid Labz, I omitted a ton of color tones. Here’s an example of the pink color variables for the site. Notice I skipped values between 50 and 400? Well, I didn’t need ’em. Notice I added 200d and 600d? Well, I kinda wanted a desaturated (or muted) variant of these colors… which… could not be captured by the existing systems. So I added d for desaturated 🙃. You can see the results of that yourself. It’s not too shabby in my opinion — with splashes of color that perhaps bring some joy to your face when you scroll through the site. You get the drift, yeah? It’s not too hard. So, don’t be scared and give that a try.

Designing Color Palettes Programmatically
If you’re the kinda person that prefers generating colors programmatically (and of course you can hand-tweak them afterwards if you desire), here are a few generators you may fancy: RampenSau, Perceptual Palettes, and the Accessible Palette Generator. Of these, I highly recommend checking out @meodai‘s RampenSau because he’s really knowledgeable about the color space and does incredible work there. (Use the monochromatic feature to make this easy.)

Using the Color Palettes
A thousand words later, we’re finally getting to the meat of this article. 😅 With a seemingly unlimited amount of options given by the color palettes, it makes sense to assume that we can use them however we want — but when it comes to application, most systems seem to fall short. (Even short is generous. They actually seem to be severely restricted.) For example, DaisyUI seems to support only two tones per color… Pico CSS, a system with one of the most options at first glance, limits you to about 10 “semantic class name” variants. But if you look deeper, we’re still looking at about two tones per “thing”: Primary (one tone), Background and background hover (two tones), Border and border hover (two tones), Underline (is this the same as border? More on this below.) 
And so on… Which brings me to one very important and very puzzling question: why provide so many tones if the system only lets you use a couple of them? I can’t answer this question because I’m not the creators behind those systems, but I’d guess these might be the possible causes: These system designers might not be as sensitive to colors as visual designers. Semantic class name confusion. Values were simply meant as guidelines. The second one is a serious, and limiting, issue that we can deal with today. As for the first, I’m not saying I’m a great designer. I’m simply saying that, with what I know and have discovered, something seems to be amiss here. Anyway, let’s talk about the second point.

Semantic Class Name Confusion
Observing the “semantic class names” these systems use actually unveils an underlying confusion about what “semantic” means to the web development community. Let’s go back to my remark about the --pico-primary-underline variable earlier with Pico CSS. But if you look deeper, we’re still looking at about two tones per “thing”: Primary (one tone), Background and background hover (two tones), Border and border hover (two tones), Underline (is this the same as border? More on this below), and so on… Isn’t that an interesting remark? (I ask this question because underline and border can use the same color to create a unifying effect). From what we can see here, the term “semantic” actually means two things conflated into one: An order of hierarchy (primary, secondary, tertiary, etc.), and the “thing” it was supposed to style. This gets even more confusing because the order of hierarchy can now be split into two parts: A color-specific order (so primary means red, secondary means blue, and so on), and a use-case specific order (so a heavy button might be primary, while a light button might be secondary). Okay. I can hear you say “naming is hard.” Yes, that’s the common complaint. But “naming is hard” because we conflate and reduce things without making distinctions. 
I propose that:

- We keep the hierarchy (primary, secondary, tertiary) to the color hue.
- We name the strength, “oomph,” or weight of the button with a word that describes its relative weight or appearance, like outline, light, heavy, ghost, etc.
- We can create the appearance variations easily with something I call The Pigment System. But perhaps that’s an article for another day.

Anyway, by creating this separation, we can now create a wealth of color combinations without being restricted by a single hierarchical dimension. Moving on…

The Second Problem With Semantics

Using the same example (just for simplicity, and definitely not trying to bash Pico CSS because I think they’re doing a really good job in their own right), we see that semantics are conflated by stating the hierarchy along with what it’s supposed to style. Examples are:

- --pico-primary-background
- --pico-primary-border

These two properties result in a problem when designing and developing the site later. If you consider these questions, you’d see the problems too:

First: By using --pico-primary-background… does it mean we only have one main background color? What if we need other colors? Do we use --pico-secondary-background? What if we need more? Do we use tertiary (3rd), quaternary (4th), quinary (5th), and senary (6th) for other colors?

Second: What if we have variants of the same color? Do we use things like --pico-primary-background-1, -2, -3, and so on?

Third: Now, what if I need the same color for the --pico-primary-background and the --pico-primary-border of the same component? But I’d need another color for a second one?

This starts getting confusing and “semantics” begins to lose its meaning.

What Semantics Actually Mean

Consulting etymology and the dictionary gives us clues about how to actually be semantic — and keep meaning. Two things we can see here:

- Semantics means to indicate by a sign.
- It can be related to meaning or logic.
What I’m noticing is that people generally ascribe “semantics” to words, as if only words can convey meanings and numbers cannot… But what if we broaden our scope and view numbers as semantic too — since we know 100 is a much lighter tint and 900 is a dark shade, isn’t that semantics showing through numbers?

Better Semantics

We already have a perfectly usable semantic system — using numbers — through the color palettes. This is highly semantic! What we simply need is to adjust it such that we can use the system to easily theme anything. How? Simple. I made the argument above that the hierarchy (primary, secondary, etc.) should be used to refer to the colors. Then, if you use pink as your main (hence primary) color… you can simply set another color, say orange, as your secondary color! (Duh? Yeah, it’s obvious in hindsight.) Implementing this into our code, we can do a one-to-one port between hierarchy and hues. If you do this via CSS, it can be manual and not very fun…

.theme-pink {
  --color-primary-100: var(--color-pink-100);
  --color-primary-200: var(--color-pink-200);
  --color-primary-300: var(--color-pink-300);
  /* and so on ... */
  --color-secondary-100: var(--color-orange-100);
  --color-secondary-200: var(--color-orange-200);
  --color-secondary-300: var(--color-orange-300);
  /* and so on ... */
}

With Sass, you can run a quick loop and you’ll get these values quickly.

$themes: (
  pink: (
    primary: pink,
    secondary: orange
  )
);

$color-tones: 100, 200, 300, 400, 500, 600, 700, 800, 900;

@each $theme-name, $theme-colors in $themes {
  .theme-#{$theme-name} {
    @each $tone in $color-tones {
      --color-primary-#{$tone}: var(--color-#{map-get($theme-colors, primary)}-#{$tone});
      --color-secondary-#{$tone}: var(--color-#{map-get($theme-colors, secondary)}-#{$tone});
    }
  }
}

For Tailwind users, you could do a loop via a Tailwind plugin in v3, but I’m not quite sure how you would do this in v4.
// The plugin code
const plugin = require('tailwindcss/plugin')

module.exports = plugin(function ({ addUtilities, theme }) {
  const splendidThemes = theme('splendidThemes', {})
  const palette = theme('colors')

  // Collect all unique tone keys used by any color in any theme
  const allTones = new Set()
  Object.values(splendidThemes).forEach(themeConfig => {
    Object.values(themeConfig).forEach(colorName => {
      if (palette[colorName]) {
        Object.keys(palette[colorName]).forEach(tone => allTones.add(tone))
      }
    })
  })

  const utilities = {}
  Object.entries(splendidThemes).forEach(([themeName, themeConfig]) => {
    const themeClass = {}
    Object.entries(themeConfig).forEach(([role, colorName]) => {
      if (!palette[colorName]) return
      allTones.forEach(tone => {
        if (palette[colorName][tone] !== undefined) {
          themeClass[`--color-${role}-${tone}`] = `var(--color-${colorName}-${tone})`
        }
      })
    })
    utilities[`.theme-${themeName}`] = themeClass
  })

  addUtilities(utilities)
})

// Using it in Tailwind v3
module.exports = {
  plugins: [
    require('path-to-splendid-themes.js')
  ],
  theme: {
    splendidThemes: {
      pink: { primary: 'pink', secondary: 'orange' },
      blue: { primary: 'blue', secondary: 'purple', tertiary: 'orange' }
    }
  },
}

Will this generate a lot of CSS variables? Yes. But will it affect performance? Maybe, but I’d guess it won’t affect performance much, since this code is just a couple of bytes more. (Images, by contrast, weigh thousands of times more than these variables do.) And now we no longer need to worry about knowing whether background-1 or background-2 is the right keyword. We can simply use the semantic numerals in our components:

.card {
  background-color: var(--color-primary-500);
}

.card-muted {
  background-color: var(--color-primary-700);
}

One More Note on Semantics

I think most frameworks get it right by creating component-level semantics. This makes a ton of sense.
For example, with Pico CSS, you can set component-scoped variables like --pico-card-background-color. In your own creations, you might want to reduce the amount of namespaces (so you write less code; it’s less tedious, yeah?):

.card-namespaced {
  --card-header-bg: var(--color-primary-600);
}

.card-without-namespace {
  --header-bg: var(--color-primary-600);
}

No “extra semantics” or even “namespacing” needed when the project doesn’t require it. Feel free to Keep It Simple and Sweet. This brings me to a separate point on component vs global variables.

Global Variables

Some variables should be global because they can propagate through the entirety of your site without you lifting a finger (that is, once you design the CSS variables appropriately). An example of this is with borders:

:root {
  --border-width: 1px;
  --border-style: solid;
  --border-color: var(--color-neutral-700);
}

You can change the global --border-color variable and adjust everything at once. Great! To use this sorta thing, you have to build your components with those variables in mind.

.card {
  border: var(--border-width) var(--border-style) var(--border-color);
}

This can be easily created with Tailwind utilities or Sass mixins. (Tailwind utilities can act as convenient Sass mixins.)

@utility border-scaffold {
  border: var(--border-width) var(--border-style) var(--border-color);
  border-radius: var(--radius);
}

Then we can easily apply them to the component:

.card {
  @apply border-scaffold;
}

To change the theme of the card, we can simply change the --border-color variable, without needing to include the card-border namespace.

.card-red {
  --border-color: var(--color-red-500);
}

This way, authors get the ability to create multiple variations of the component without having to repeat the namespace variable. (See, even the component namespace is unnecessary.)
.pico-card-red {
  --pico-card-background-color: var(--color-red-500);
}

.card-red {
  --bg-color: var(--color-red-500);
}

Now, I know we’re talking about colors and theming, and we segued into design systems and coding… but can you see that there’s a way to create a system that makes styling much easier and much more effective? Well, I’ve been pondering this kinda thing a lot over at Splendid Labz, specifically in Splendid Styles. Take a look if you are interested. Enough tooting my own horn! Let’s go back to theming! Here are some other values that you might want to consider including in your global variables:

:root {
  --border-width: ...;
  --border-style: ...;
  --border-color: ...;
  --outline-width: ...;
  --outline-style: ...;
  --outline-focus-color: ...;
  --outline-offset: ...;
  --transition-duration: ...;
  --transition-delay: ...;
  --transition-easing: ...;
}

How Important is All of This?

It depends on what you need. People who need a single theme can skip the entire conversation we hashed out above because they can just use the color palettes and call it a day.

.card {
  background: var(--color-pink-500);
  color: var(--color-pink-900);
}

For those who need multiple themes with a simple design, perhaps the stuff that Pico CSS, DaisyUI, and other frameworks have provided is sufficient.

What it takes to create a DaisyUI theme.

Side Rant: Notice that DaisyUI contains variables for --color-success and --color-danger? Why? Isn’t it obvious and consistent enough that you can use --color-red for errors directly in your code? Why create an unnecessary abstraction? And why subject yourself to their limitations? Anyway, rant end. You get my point. For those who want flexibility and lots of possible color shades to play with, you’ll need a more robust system like the one I suggested. This whole thing reminds me of Jason Cohen’s article, “Rare things become common at scale”: what is okay at a lower level becomes not okay at a larger scale. So, take what you need. Improve what you wish to.
And may this help you through your development journey. If you wanna check out what I’ve created for my design system, head over to Splendid Styles. The documentation may still be lacking when this post gets published, but I’m trying to complete that as soon as I can. And if you’re interested in the same amount of rigour I’ve described in this article — but applied to CSS layouts — consider checking out Splendid Layouts too. I haven’t looked back since I started using it. Have fun theming!

Thinking Deeply About Theming and Color Naming originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  10. By: Janus Atienza Sun, 03 Aug 2025 17:04:54 +0000

Image source: Pixabay

Developers face unique security challenges that go beyond typical desktop users. Your development machine contains source code, API keys, database credentials, and often sensitive client information. A compromised developer workstation can lead to supply chain attacks, data breaches, and stolen intellectual property. Linux offers powerful security features, but they require proper configuration to be effective. This guide covers essential hardening practices that will protect your development environment without hindering your productivity.

Understanding Developer Security Risks

Development environments are attractive targets for several reasons. Attackers know that developer machines often contain valuable source code, deployment credentials, and access to production systems. The tools developers use daily, from package managers to IDE plugins, create additional attack surfaces. Common threats include malicious packages in repositories, compromised development tools, and social engineering attacks targeting developers. The interconnected nature of modern development workflows means a single compromised machine can potentially affect entire projects or organizations.

Starting with a Secure Foundation

Your security starts with choosing the right Linux distribution and configuring it properly from the beginning. Ubuntu LTS, CentOS, and Fedora all offer strong security features, but they need proper setup. Enable full disk encryption during installation using LUKS. This protects your data if your laptop gets stolen or lost. Configure secure boot if your hardware supports it, and perform a minimal installation to reduce the attack surface. After installation, disable unnecessary services immediately. Use systemctl list-unit-files to see which services are installed and enabled, and disable anything you don’t need. Enable automatic security updates, but test them in a non-critical environment first.
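The "disable anything you don't need" step above can be scripted. Here is a minimal sketch that filters the enabled-services list against a deny-list; the sample output and the candidate service names are illustrative only, and on a live system you would pipe `systemctl list-unit-files --type=service --state=enabled` straight into the filter instead.

```shell
# Sketch: flag enabled services a dev workstation may not need.
# `sample` stands in for real `systemctl list-unit-files` output.
sample='ssh.service enabled
cups.service enabled
bluetooth.service enabled
apparmor.service enabled'

# Services you have decided you do not need (examples, adjust to taste)
candidates='cups|bluetooth|avahi'

printf '%s\n' "$sample" | grep -E "$candidates"
# For each hit, disable and stop it in one go (requires root):
#   sudo systemctl disable --now cups.service
```

On a real machine, replace the `sample` variable with live output and review each match before disabling anything.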
Implementing Strong Access Controls

Follow the principle of least privilege by creating separate user accounts for different projects or clients. Many custom software developers handle multiple client codebases that require strict separation to meet confidentiality requirements. Configure sudo properly by editing the sudoers file with visudo. Instead of giving users blanket sudo access, grant specific permissions for necessary commands only. Use groups to manage permissions efficiently rather than setting individual user permissions. Set up SSH key-based authentication and disable password authentication entirely. Generate strong SSH keys with ssh-keygen -t ed25519 (Ed25519 keys have a fixed size, so no -b flag is needed) and protect them with passphrases. Configure automatic screen locking and session timeouts to protect against unauthorized physical access.

Configuring Network Security

A properly configured firewall is your first line of defense against network attacks. Use UFW (Uncomplicated Firewall) for a user-friendly interface to iptables. Start with a default deny policy and only allow necessary connections.

sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw enable

If you work remotely, set up a VPN connection to your office or use a trusted VPN service. Configure fail2ban to automatically block IP addresses that show suspicious activity, particularly repeated failed login attempts. Monitor your network connections regularly with netstat -tulpn or ss -tulpn to identify unexpected connections. Any unfamiliar listening services should be investigated immediately.

Securing Development Tools

Your IDE and text editor plugins can introduce security vulnerabilities. Only install plugins from trusted sources and review their permissions carefully. Disable telemetry and data collection features that might leak information about your projects. Configure Git securely by setting up GPG signing for your commits.
This proves the authenticity of your code changes and prevents impersonation. Use git config --global commit.gpgsign true to enable automatic signing. For container-based development, never run Docker containers with privileged access unless absolutely necessary. Use user namespaces to map container root to unprivileged users on the host system.

Protecting Code and Sensitive Data

Set proper file permissions on your projects. Use chmod 700 for directories containing sensitive code and chmod 600 for files with credentials or API keys. Consider using encrypted directories with tools like EncFS for particularly sensitive projects. Create encrypted backups of your work and store them securely offsite. Test your backup restoration process regularly to ensure you can recover your data when needed. Never store credentials in your source code. Use environment variables or dedicated secret management tools instead. Tools like git-secrets can help prevent accidental credential commits.

System Monitoring and Logging

Configure comprehensive logging to detect suspicious activity. Most Linux distributions use systemd journaling, but you may want to set up traditional syslog as well for compatibility with monitoring tools. Install and configure a host-based intrusion detection system like OSSEC or Samhain. These tools can detect unauthorized file changes, suspicious processes, and potential security breaches. Set up file integrity monitoring for critical system files and your source code directories. The AIDE (Advanced Intrusion Detection Environment) tool can alert you to unexpected changes.

Managing Dependencies Securely

Modern development relies heavily on third-party packages and libraries. Regularly scan your dependencies for known vulnerabilities using tools like npm audit for Node.js projects or pip-audit for Python. Consider setting up private package repositories for your organization to have better control over the packages your projects use.
This also protects against supply chain attacks where malicious code gets injected into popular packages.

Keeping Your System Updated

Enable automatic security updates, but be cautious with automatic feature updates that might break your development environment. Use unattended-upgrades on Ubuntu/Debian systems:

sudo apt install unattended-upgrades
sudo dpkg-reconfigure unattended-upgrades

Regularly audit your system with tools like Lynis, which can identify security configuration issues and suggest improvements. Run sudo lynis audit system monthly to catch potential problems.

Essential Security Checklist

Start with these immediate actions: enable full disk encryption, configure a firewall, disable unnecessary services, set up automatic security updates, create separate user accounts for different projects, configure SSH key authentication, enable fail2ban, and set up encrypted backups. Monthly tasks include reviewing system logs, updating all software packages, scanning dependencies for vulnerabilities, and reviewing user accounts and permissions.

Conclusion

Securing your Linux development environment requires ongoing attention, but the effort pays off in protecting your work and your clients’ data. Start with the basics like encryption and firewalls, then gradually implement more advanced security measures as your comfort level increases. Remember that security is about balancing protection with productivity. The goal is to create a secure environment that supports your development work rather than hindering it. Regular maintenance and staying informed about new threats will keep your development environment secure over time.

The post Essential Linux Security Hardening for Development Environments appeared first on Unixmen.
  11. by: Abhishek Prakash Sun, 03 Aug 2025 17:28:56 +0530

When your Linux system is generating thousands of log entries every minute, finding information about a specific service can feel like searching for a needle in a haystack. That's where journalctl's powerful filtering capabilities come to the rescue! To filter journalctl logs by a specific service, use the service name in the following manner:

journalctl -u servicename

The -u flag (short for "unit") is your primary tool for filtering logs by service. It tells journalctl to show only entries related to a specific systemd unit.

journalctl -u mysql

The above command shows mysql log entries:

Jun 23 07:51:39 ghost-learnubuntu systemd[1]: Starting MySQL Community Server...
Jun 23 07:51:47 ghost-learnubuntu systemd[1]: Started MySQL Community Server.
Jun 23 07:56:53 ghost-learnubuntu systemd[1]: Stopping MySQL Community Server...
Jun 23 07:56:56 ghost-learnubuntu systemd[1]: mysql.service: Succeeded.
Jun 23 07:56:56 ghost-learnubuntu systemd[1]: Stopped MySQL Community Server.
Jun 23 07:56:56 ghost-learnubuntu systemd[1]: Starting MySQL Community Server...

Here are some other real-world examples of extracting logs of certain services:

# View Apache logs
journalctl -u apache2

# View SSH daemon logs
journalctl -u ssh

# View Docker logs
journalctl -u docker

Finding the right service name

Before you can filter, you need to know the exact service name. One way would be to list all systemd services:

systemctl list-units --type=service

This shows all currently loaded services with their exact names.

UNIT                      LOAD   ACTIVE SUB     DESCRIPTION
accounts-daemon.service   loaded active running Accounts Service
acpid.service             loaded active running ACPI event daemon
alsa-restore.service      loaded active exited  Save/Restore Sound Card State
apparmor.service          loaded active exited  Load AppArmor profiles
apport.service            loaded active exited  LSB: automatic crash report generation

As you can see above, there are five services visible.
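If you look up unit names often, a tiny helper speeds things up. This is just a sketch of ours (find_service is not a standard tool): it greps the first column of systemctl list-units output. It is shown here against a saved sample so the text processing is visible; on a live system, swap in the real command noted in the comment.

```shell
# Hypothetical helper: match loaded service names so you know exactly
# what to pass to `journalctl -u`. SERVICE_LIST stands in for live output.
SERVICE_LIST='mysql.service loaded active running MySQL Community Server
ssh.service loaded active running OpenBSD Secure Shell server
sshd-keygen.service loaded inactive dead OpenSSH key generator'

find_service() {
  # Live version: systemctl list-units --type=service --all --no-legend |
  #               awk '{print $1}' | grep -i "$1"
  printf '%s\n' "$SERVICE_LIST" | awk '{print $1}' | grep -i "$1"
}

find_service ssh
# → ssh.service
#   sshd-keygen.service
```

Once you see the exact unit name, pass it to journalctl, e.g. journalctl -u ssh.service.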
You may omit the .service part of the unit name. Instead of apport.service, you could just use apport. However, some services may have both .service and .socket units. The problem here is that you'll have hundreds of services running. Scrolling through all of them is a waste of time. Make use of the good old grep command. For example, find services containing "ssh" in their name:

systemctl list-units --type=service | grep ssh

This matters because some distributions use sshd as the SSH service name, so confirming the correct name first ensures you pull the right logs.

🚧 Service names are case sensitive. nginx is NOT the same as Nginx.

More tips on filtering journal logs by services

You learned to show journalctl logs for a specific service. But there could still be way too many, or perhaps way too few, logs. Let's see more ways of expanding or narrowing down your log filtering by services.

Multiple services at once

Monitor multiple services simultaneously - perfect for troubleshooting interconnected applications!

journalctl -u nginx -u mysql -u redis

Service patterns with wildcards

View logs from all services starting with "docker". Note the quotes around the pattern!

journalctl -u 'docker*'

Service logs with context

By default, you get all the logs of the specific service. Reduce the noise by showing only the last 50 log entries.

journalctl -u servicename -n 50

Reverse chronological order

Show newest entries first - great for finding recent issues quickly.

journalctl -u servicename -r

💡 Some services have multiple related units. Docker, for example, has docker.service, docker.socket, and depends on containerd.
While troubleshooting an issue with docker, you may want to also look at the dependent unit logs.

Combine with time filtering

Get journal logs for a specific date range instead of displaying all the possible logs for the service:

journalctl -u servicename --since "2024-01-01" --until "2024-01-02"

Add more time frames as needed:

--since "2023-01-01"
--since "yesterday"
--since "10 minutes ago"
--since "1 hour ago"

Priority-based service filtering

Show only error-level messages from a service.

journalctl -u servicename -p err

Other priority levels you can use include:

debug (7)
info (6)
notice (5)
warning (4)
err (3)
crit (2)
alert (1)
emerg (0)

Logs from most recent service boot

Show only logs from the current boot session.

journalctl -u servicename -b

Follow service logs in real-time

Watch journalctl logs as they happen - perfect for live troubleshooting!

journalctl -u servicename -f

💡 You can combine multiple filters to narrow down your search.

📋 Summary

| Option | Syntax | Description | Example |
| --- | --- | --- | --- |
| Basic Service Filter | -u servicename | Show logs for specific service | journalctl -u mysql |
| Multiple Services | -u service1 -u service2 | Monitor multiple services simultaneously | journalctl -u nginx -u mysql -u redis |
| Service Pattern | -u 'pattern*' | View logs from services matching pattern | journalctl -u 'docker*' |
| Last N Lines | -u servicename -n N | Show last N log entries for service | journalctl -u servicename -n 50 |
| Reverse Order | -u servicename -r | Show newest entries first | journalctl -u servicename -r |
| Time Range | -u servicename --since "start" --until "end" | Get logs for specific date range | journalctl -u servicename --since "2024-01-01" --until "2024-01-02" |
| Time Since | -u servicename --since "time" | Show logs since specific time | journalctl -u servicename --since "yesterday" |
| Priority Filter | -u servicename -p priority | Show only specific priority level messages | journalctl -u servicename -p err |
| Current Boot | -u servicename -b | Show logs from current boot session only | journalctl -u servicename -b |
| Follow Real-time | -u servicename -f | Watch logs in real-time as they happen | journalctl -u servicename -f |

Wrapping Up

Filtering journalctl logs by service is an essential skill for Linux system administration. It transforms overwhelming log output into focused, actionable information. Start with the basic -u flag, then gradually incorporate time filters, priority levels, and output formatting as your needs become more sophisticated. Remember: the key to effective service log analysis is knowing your service names, understanding the relationships between services, and using the right combination of filters to find exactly what you're looking for. Happy service debugging!
  12. by: Abhishek Prakash Sat, 02 Aug 2025 14:27:16 GMT

I know that there is pretty extensive online documentation from Ubuntu available for free. But extensive can be overwhelming too. Probably that's the reason why there is a new book on Ubuntu, unsurprisingly called "The Ultimate Ubuntu Handbook", and it gives you a good overview of Ubuntu as a desktop, as a server, and as a developer platform. This book is written by Ken VanDine, a Linux veteran with over 16 years of work experience at Canonical, the parent company of Ubuntu. He primarily worked on GNOME, Ubuntu Desktop, and Snap integration. Ken also has over 30 years of experience in building Linux distros. I therefore wondered what perspective someone like Ken, an Ubuntu insider, would take when he decided to write a book about the operating system. Turns out, it's something for everyone who wants to use Ubuntu as their daily driver.

The Ultimate Ubuntu Handbook

As the title suggests, this is a book dedicated to Linux users who are using Ubuntu on their desktop or server. The book contains 19 chapters divided into these four parts.

Part 1: Introduction to Ubuntu

Starting with a brief history and philosophy of Ubuntu, the book moves on to dedicate a chapter to what's new in version 24.04. Yes, the book is focused on the current LTS release. It then lists the advantages of using Ubuntu, followed by an installation guide. All this spans four chapters.

Part 2: Getting the most out of the Ubuntu system

The next six chapters are about using and understanding some basic but essential concepts for using Ubuntu. It starts with a chapter on exploring the Ubuntu desktop and then moves on to dedicated chapters on package management, handling updates, and the best practices that should be followed. A dedicated chapter on getting help may seem overkill, but beginners will find it helpful. There is also a chapter introducing 'Ubuntu Landscape', an enterprise tool for managing your fleet of Ubuntu servers.
The last chapter in this section lists command line tricks and shortcuts, which is basically a short introduction to essential Linux command usage such as finding files, text, disk usage, etc.

Part 3: Network security and privacy

This section has three chapters, and the first one introduces the basic security landscape. The second chapter is dedicated to using the ufw firewall. The third chapter discusses TPM and disk encryption with LUKS and home directory encryption with eCryptfs.

Part 4: Ubuntu as a development platform

The last part of the book introduces Ubuntu as a developer-focused platform. There are dedicated chapters on LXD (containerization), Multipass (for cloud-style virtualization) and MicroK8s (for Kubernetes).

What I like

This is a good introduction for someone who is familiar with Linux to some extent and wants to use it as a daily driver or main development environment. The book is also filled with callouts to highlight important details. The chapter on Kubernetes features MicroK8s, which is a handy tool for local Kubernetes deployment. This is smart and thoughtful. Most Kubernetes setups involve deploying multiple servers, and that is not feasible for everyone. The book has something for all kinds of Ubuntu users. You don't need to read the entire book, but you should find around 50% of the book suitable for your use case, irrespective of whether you are a desktop user, a sysadmin, devops, or a developer. Of course, Ubuntu should be your choice of Linux system here.

What I do not like

I feel that sometimes the book relies too much on text. For example, there is a section that discusses using the Software Center, and I feel that it could have included more screenshots. Since there is so much to cover, the book sometimes only touches the surface of a topic. For example, the chapter on packages doesn't discuss the concept of sources.list.
That's something that needs to be talked about, especially when it comes to fixing mistakes that might happen when third-party repositories are added without thinking. But I understand that the author cannot go too deep into every topic either. If we had to cover packaging in Ubuntu in depth, it could be a book in itself.

Do we need a book in the age of unlimited internet and AI?

Well, yes and no. Looking for specific information on search engines can be daunting. We live in the age of information overload, and getting precise information from a trusted source is a challenge. A book solves this problem. Another thing is that you can revisit sections of your favorite book, as you know exactly where to look for a certain detail. A quick web search or AI query may seem quicker, but you may not get the same answer that you were hoping to find. This is why I prefer having my own knowledge base with Obsidian or Logseq. I organize my own notes and refer to them when needed. These notes also contain snippets from various Linux books I read.

Conclusion

Whether you're new to Ubuntu or have been using it for years, The Ultimate Ubuntu Handbook offers a wealth of practical tips, time-saving tricks, and insider insights that will help you get even more out of your Ubuntu experience. Even experienced users will discover useful features and best practices they might not have explored on their own, making it a valuable companion for refining your workflow and boosting your productivity. This isn't just a reference; it's a hands-on guide that makes Ubuntu easier, more secure, and more powerful for everyday use. If you want to confidently navigate Ubuntu 24.04 and unlock its full potential, this book is a must-have addition to your collection. The author, Ken VanDine, has been working on Ubuntu for more than a decade, and he has a good understanding of what a "typical Ubuntu user" would want to learn, and it duly reflects in his book.
The Ultimate Ubuntu Handbook is available in print and digital formats on both the Packt and Amazon websites.

The Ultimate Ubuntu Handbook on Packt
The Ultimate Ubuntu Handbook on Amazon
  13. by: Abhishek Prakash Fri, 01 Aug 2025 17:07:03 +0530

In case you missed the announcement last week, our latest course on systemd is now available: The systemd Playbook: Learn by Doing — master systemd the practical way, one lab at a time. Next, I am working on an ebook on Linux commands. It will be ready by next month. Stay tuned for it. Take advantage of the "lifetime membership" offer. For a single payment, you get the paid membership to download all 6 ebooks and access all premium courses. No recurring payment, you pay just once and enjoy forever. First 50 lifetime members get it for just $99 instead of $139. Only 21 more spots are available. Hurry up!
  14. By: Janus Atienza Thu, 31 Jul 2025 19:37:46 +0000

In today’s competitive business landscape, selecting a CRM development company is not just about finding a vendor – it’s about choosing a long-term technology partner that understands your workflows, infrastructure, and growth plans. For businesses operating in Linux/Unix environments, this decision carries even more weight, as compatibility and performance directly impact day-to-day operations. For companies relying on Linux or Unix-based infrastructure — which account for a significant portion of enterprise servers worldwide — the choice of CRM provider can directly influence performance, scalability, and security. In this article, we’ll explore why Linux/Unix environments are integral to CRM solutions, what features to seek in a development partner, and how this combination drives long-term success.

The Role of CRM in Modern Business

A CRM system acts as the digital heartbeat of any customer-facing operation. It centralizes data, automates tasks, and delivers insights that empower sales teams and marketing departments to work smarter, not harder. Whether it’s tracking leads, nurturing customer relationships, or personalizing support interactions, a robust CRM platform can transform the way a business operates. However, a CRM is not one-size-fits-all. Industries such as healthcare, finance, and manufacturing often require custom solutions tailored to unique workflows. This is where partnering with a specialized CRM development company becomes critical — especially one with the technical expertise to align the solution with existing infrastructure like Linux or Unix servers.

Understanding Linux and Unix in the Enterprise

Before diving into CRM development specifics, let’s clarify why Linux and Unix matter in this discussion.

1. Proven Stability and Reliability

Unix-based systems, including various Linux distributions, have earned their reputation for rock-solid stability.
Major corporations, financial institutions, and government agencies have trusted Unix for decades because downtime is not an option in mission-critical environments. CRM systems built to leverage this stability can run 24/7 with minimal maintenance. 2. Open-Source Flexibility Linux’s open-source nature allows businesses to customize their server environments extensively. This flexibility is particularly beneficial when building custom CRM applications, enabling developers to integrate specialized modules, optimize system performance, and ensure compatibility with third-party tools. 3. Security Advantages Security is paramount in CRM, where sensitive customer data is at stake. Linux and Unix systems offer robust permission structures, SELinux policies, and active open-source communities that rapidly address vulnerabilities. A CRM built for these environments can incorporate advanced firewalls, encryption standards, and compliance frameworks such as GDPR or HIPAA. 4. Cost Efficiency and Scalability Unlike proprietary systems that demand expensive licenses, Linux-based solutions reduce costs while allowing for horizontal scalability. Businesses can expand their CRM capabilities — adding more users, modules, or servers — without breaking budgets. 5. Cross-Platform Capabilities Many modern organizations run hybrid environments. A CRM development company experienced with Linux/Unix can ensure seamless integration with Windows-based desktop applications, cloud platforms like AWS or Azure, and containerized deployments. Why the Right CRM Development Company Is Critical Not all CRM developers are created equal. While many can deliver an out-of-the-box solution, only a specialized CRM development company can design, build, and maintain systems optimized for Linux/Unix environments. 
Here are the key reasons why expertise matters: Deep Knowledge of System Architecture Linux and Unix systems differ fundamentally from Windows in terms of file structures, process management, and networking. A competent CRM developer understands these nuances, ensuring the CRM takes full advantage of system resources while avoiding compatibility pitfalls. Experience with Open-Source CRM Frameworks Many custom CRMs leverage open-source frameworks like SuiteCRM, Odoo, or Dolibarr, which run exceptionally well on Linux servers. Developers familiar with these ecosystems can accelerate implementation while maintaining flexibility for customization. Integration with DevOps and Cloud Tools Modern CRM solutions often rely on containerization (Docker, Kubernetes), CI/CD pipelines, and cloud-native technologies. A Linux-friendly CRM company ensures seamless integration, whether you’re hosting on-premises, in the cloud, or using a hybrid model. Long-Term Support and Maintenance CRMs evolve with business needs. Companies with expertise in Linux/Unix can provide ongoing support, apply kernel updates, manage security patches, and optimize performance without disrupting daily operations. Key Features to Look for in a Linux/Unix-Focused CRM Development Company When selecting a partner, consider the following checklist: Proven Track Record Look for case studies or portfolios demonstrating successful CRM projects on Linux or Unix systems.  Customization Capabilities The ability to tailor modules — from sales pipelines to customer service dashboards — ensures your CRM aligns with your business model.  Security Expertise Knowledge of Linux security tools like iptables, SELinux, and encryption libraries is vital for safeguarding customer data.  Performance Optimization The company should know how to configure services like Apache, Nginx, or MySQL/MariaDB for maximum CRM performance.  
Scalable Architecture Support for microservices, load balancing, and distributed databases helps future-proof your CRM as your business grows.  Post-Deployment Support Maintenance, training, and continuous upgrades are as important as the initial build. Advantages of Linux/Unix-Based CRM Solutions Cost Savings Without Compromising Quality Open-source platforms eliminate costly licensing fees. Combined with a custom development approach, businesses achieve enterprise-grade capabilities at a fraction of the cost of proprietary CRMs. High Performance Under Heavy Loads Linux/Unix servers are designed for multitasking and high concurrency. A well-optimized CRM can handle thousands of simultaneous transactions without latency issues. Enhanced Security and Compliance Built-in security features reduce risks of breaches, while compliance with regulations like GDPR or PCI-DSS becomes easier to implement. Flexibility and Future-Proofing Whether adopting AI-driven customer insights or integrating IoT data, Linux-based CRMs provide a flexible foundation for innovation. Practical Use Cases: Linux/Unix in CRM Development Case Study: Retail Industry A global retailer running POS systems on Linux required a centralized CRM to unify customer data from multiple regions. By partnering with a Linux-savvy CRM development company, they built a scalable solution integrated with inventory management and loyalty programs. Case Study: Healthcare Hospitals using Unix-based servers for patient records implemented a secure CRM to track patient communications and follow-up schedules. The system complied with HIPAA standards and integrated seamlessly with existing Unix infrastructure. Case Study: SaaS Providers A SaaS startup offering analytics tools deployed its CRM in a Kubernetes cluster on Linux, achieving near-zero downtime and cost-effective scaling during rapid user growth. 
Future Trends in CRM and Linux/Unix AI-Powered Automation: Predictive analytics and chatbots will become standard features in Linux-based CRMs.  Edge Computing: With IoT growth, CRMs will process data closer to the source, leveraging lightweight Linux distributions.  Hybrid Cloud Deployments: Combining on-prem Unix systems with public cloud Linux servers will dominate enterprise architectures.  Increased Security Hardening: Expect more emphasis on encryption, container security, and zero-trust frameworks. Conclusion Choosing the right CRM development company is about more than just building a customer database — it’s about aligning technology with your business goals and infrastructure. For organizations running Linux/Unix systems, this alignment ensures cost efficiency, performance, and long-term scalability. By selecting a development partner with expertise in these environments, businesses can unlock the full potential of CRM — transforming customer relationships, streamlining operations, and staying competitive in a digital-first world. The post Why Choosing the Right CRM Development Company Matters: A Focus on Linux/Unix Environments appeared first on Unixmen.
  15. by: Abhishek Prakash Thu, 31 Jul 2025 15:42:54 +0530 This post is for subscribers only Subscribe now Already have an account? Sign in
  16. by: Abhishek Prakash Thu, 31 Jul 2025 13:05:49 +0530 The classic way of following a log file in real time is by using the tail -f command. But journalctl logs are a different entity; they are not plain text files like syslogs. So, how do you tail journal logs then? It's simple. You use the familiar -f option:

journalctl -f

The -f flag (short for "follow") works just like tail -f but for systemd journals. It shows the most recent log entries and keeps the output open, displaying new entries as they arrive.

💡 You probably know it already but still, use Ctrl+C to stop tailing journal logs.

Filter journalctl logs as you follow them

Now, just tailing logs as new entries are written can be overwhelming, especially on a busy Linux server. You are looking for particular information, so it is always better to filter the logs. I understand that using grep is tempting here, but journalctl has several built-in filters you can use to get relevant information from your logs. Let's see various examples.

Filter journal logs by service

Want to watch only SSH logs? Use the -u (unit) flag! This is incredibly useful when troubleshooting specific services.

journalctl -u ssh -f

The unit should be a valid name, so if you are in doubt, list the systemd services and get the exact name.

💡 Service names are case-sensitive: nginx ≠ NGINX

Filter logs based on priority level

The -p option of journalctl allows you to filter logs of certain priority levels. These priority levels are:

emerg (0): Emergency
alert (1): Alert
crit (2): Critical
err (3): Error
warning (4): Warning
notice (5): Notice
info (6): Info
debug (7): Debug

To only show error messages and above:

journalctl -p err -f

The above command will show logs with priority 3 (error), 2, 1 and 0. You can also use numbers instead of mnemonics:

journalctl -p3 -f

💡 You might need to use sudo to access certain system logs.

Time-based filtering

You can tail logs in real time along with some older logs for context. 
journalctl --since "1 hour ago" -f

The command above starts tailing from entries that occurred in the last hour. You can also use other time-based filters such as:

--since "2023-01-01"
--since "yesterday"
--since "10 minutes ago"

Another example that watches both nginx and your application to help you troubleshoot:

journalctl -u nginx -u myapp -f --since "5 minutes ago"

💡 journalctl displays times in your system's local timezone by default.

Add initial lines for context

Another way to add context is by including the last few lines.

journalctl -n 50 -f

This shows the last 50 lines and then follows new entries. Replace 50 with however many lines you want to see initially.

Filter kernel messages only

Focus solely on kernel messages using the -k flag.

journalctl -k -f

Combine multiple services and filters to narrow down your search

You can combine services as well as filters to be more specific with your log analysis.

journalctl -u service1 -u service2 -p error -f

For example, the command below will show warnings and above from apache2 and ssh in real time, along with 20 lines from the earlier stored logs.

journalctl -u apache2 -u ssh -p warning -n 20 -f

This powerful combination shows:

Only apache2 and ssh logs (-u apache2 -u ssh)
Warning level and above (-p warning)
Last 20 lines initially (-n 20)
Following new entries (-f)

Use grep for extra filtering

If you are lost, the classic grep command is always there. Combine journalctl with grep to filter for specific patterns.

journalctl -f | grep "ERROR"

📋 Summary

Command                            Purpose
journalctl -f                      Follow all new log entries
journalctl -n 20 -f                Show last 20 lines, then follow
journalctl -u service -f           Follow a specific service
journalctl -p err -f               Follow error-level or higher logs only
journalctl --since "1h ago" -f     Follow logs from the last hour

Effectively analyzing journal logs

Journalctl logs can take up GBs of disk space. I advise cleaning them from time to time. 
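To check how much space the journal is using and to reclaim it, journalctl ships with built-in vacuum options. A quick sketch (the 500M and 2-week limits are arbitrary examples; the vacuum commands typically need root privileges):

```shell
# Show how much disk space the journal currently occupies
journalctl --disk-usage

# Trim stored journal logs down to a total of 500 MB
# (oldest archived entries are removed first)
sudo journalctl --vacuum-size=500M

# Or delete archived journal entries older than two weeks
sudo journalctl --vacuum-time=2weeks
```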
How to Clear Systemd Journal Logs in Linux: This quick tutorial shows you two ways to clear systemd journal logs from your Linux system (Linux Handbook, Abhishek Prakash).

Tailing journalctl logs is an essential skill for any Linux administrator or developer. It's your window into what's happening on your system right now. Start with the basic -f flag, then gradually incorporate filters as you become more comfortable. Remember: the key to effective log monitoring is knowing what to look for and filtering out the noise. Happy log hunting! 🕵️‍♂️
  17. by: Abhishek Prakash Thu, 31 Jul 2025 04:28:12 GMT The summer holiday season starts. Time to hit the beach, mountains, or the couch 😄 My teammate Sourav is an avid gamer, and he is working on several game suggestion articles. If you play computer games, you'll find some good suggestions to enjoy your summer holidays.

Diablo-like Games You Can Play With Steam on Linux: Slash, loot, and grind your way through these Diablo-like games on Linux and Steam this summer (It's FOSS, Sourav Rudra).

💬 Let's see what else you get in this edition:

A new Linux kernel release.
Review of OpenMandriva Lx 'Rock'.
Managing startup applications.
Open source apps for quick file transfers.
And other Linux news, tips, and, of course, memes!

SPONSORED

PikaPods allows you to quickly deploy your favorite open source software. All future updates are handled automatically by PikaPods while you enjoy using the software. PikaPods also shares revenue with the original developers of the software. You get a $5 free credit to try it out and see if you can rely on PikaPods. Try PikaPods with $5 free credits

📰 Linux and Open Source News

Zed now allows users to disable all its AI features.
Ubuntu's Linux kernel strategy has changed once again.
Mastodon is implementing a donation banner in its mobile apps.
Hyprland has launched Hyprperks, its paid premium tier.
Linux kernel 6.16 has arrived with a focus on AMD, Intel, and NVIDIA.

Latest Linux Kernel 6.16 is all Focused on AMD, Intel, and NVIDIA: This kernel release introduces many refinements (It's FOSS News, Sourav Rudra).

🧠 What We're Thinking About

Sourav took OpenMandriva Lx 'Rock' for a spin to see how it stacks up against Fedora.

How Does OpenMandriva Lx 'Rock' Stack Against Fedora? My Thoughts: I took OpenMandriva Lx 'Rock' for a spin this week and compared it to Fedora, my daily driver (It's FOSS News, Sourav Rudra).

🛒 Linux courses on Humble Bundle

Games are one way to spend the holidays; improving your Linux skills could be another. 
There is an ongoing course bundle sale on Humble Bundle that might be worth checking out. Part of your course bundle purchase is donated to Alzheimer's Research UK.

Humble Video Learning Bundle: Ultimate Linux Developer Collection by Packt: Learn how to code and develop for the ultra-customizable, open-source operating system, Linux! Pay what you want and help support Alzheimer's Research UK (Humble Bundle).

🧮 Linux Tips, Tutorials, and Learnings

Learn how to copy files between remote systems in the Linux command line.
You can install Conky on Ubuntu to display CPU, memory, and other hardware information on the desktop.
And a guide on handling startup applications in Ubuntu; the steps are likely to be applicable to other distros with the GNOME desktop.
Here are five open source apps that you can use for quick wireless transfers between Linux and Android:

5 Open Source Apps You Can Use for Seamless File Transfer Between Linux and Android: Want to share selected files between your Android smartphone and Linux computer? Explore these open source tools (It's FOSS, Sourav Rudra).

Desktop Linux is mostly neglected by the industry but loved by the community. For the past 13 years, It's FOSS has been helping people use Linux on their personal computers. And we are now facing an existential threat from AI models stealing our content. If you like what we do and would love to support our work, please become an It's FOSS Plus member. It costs $24 a year (less than the cost of a burger meal each month) and you get an ad-free reading experience with the satisfaction of helping the desktop Linux community. Join It's FOSS Plus

👷 AI, Homelab and Hardware Corner

Lumo is Proton's privacy-focused AI assistant that looks to take on offerings by Big Tech.

I Tried Proton's Lumo AI, a Private Alternative to ChatGPT: Tired of ChatGPT tracking you? Proton is now offering end-to-end encrypted AI chats with no data logging or tracking. 
Here's my experience with it (It's FOSS News, Sourav Rudra).

TUXEDO Computers has launched the InfinityBook Pro 15 (Gen10)!

✨ Project Highlight

My Expenses is a capable finance tracking app that just works.

My Expenses: A Capable Open Source Finance Tracking App for Android: A solid finance tracking and management app for Android users (It's FOSS News, Sourav Rudra).

📽️ Videos I am Creating for You

The latest video discusses system monitoring tools in Linux. Subscribe to It's FOSS YouTube Channel.

🧩 Quiz Time

Can you match Ubuntu releases with their respective mascots?

Match Ubuntu Releases With Their Mascots: Puzzle: Identify the mascots of various Ubuntu releases and match them with their release version number. Simple. (It's FOSS, Abhishek Prakash)

💡 Quick Handy Tip

In KDE Plasma's Klipper clipboard manager, you can quickly generate a QR code. With this, if you have a link in your clipboard, you can scan the generated QR code with your phone and open it. To create a new QR code, head into the clipboard and click on the "Show QR code" button. This will give you a QR code to scan; most text entries can be shared this way for quick transfers to your phone.

🤣 Meme of the Week

Big Tech is greedy. 😒

🗓️ Tech Trivia

On August 1, 1967, the U.S. Navy recalled Grace Hopper to help standardize COBOL, a revolutionary programming language. Hopper, a computing pioneer, also developed one of the first compilers and helped popularize the term "computer bug."

🧑‍🤝‍🧑 FOSSverse Corner

Doron has posted the first part of his tutorial on building reliable storage infrastructure.

Building Reliable Storage Infrastructure: Filesystems, SANs, and Real-World Tactics, Part 1. Table of Contents: I. Introduction; II. Storage System Fundamentals: A. Definitions and Concepts, B. Physical Components; III. Block Storage and Volumes: A. What Is a LUN?, B. Client Access: Block Devices vs File Shares, C. Cloud Block Storage in Practice, D. 
Live Migration and Failover Implications; IV. Enterprise SAN Features: A. Redundancy Mechanisms, B. Replication Types, C. Multi-site Disaster Recovery a… (It's FOSS Community, Doron_Beit-Halahmi)

❤️ With love

Please share it with your Linux-using friends and encourage them to subscribe (hint: it's here). Share the articles in Linux subreddits and community forums. Follow us on Google News and stay updated in your news feed. Opt for the It's FOSS Plus membership and support us 🙏 Enjoy FOSS 😄
  18. by: Sourav Rudra Wed, 30 Jul 2025 13:47:12 GMT Diablo is an iconic action RPG franchise built around fast-paced isometric combat, deep character customization, and the relentless pursuit of loot. Set in a dark fantasy world plagued by demonic forces, the series challenges players to battle through hordes of monsters, uncover powerful gear, and dive deep into dungeons. Over the years, new installments of Diablo have refined and expanded on the core formula, adding skill trees, seasonal content, online co-op, and endgame systems that keep players coming back. The list below highlights some of the most notable and promising Diablo-like titles available on Linux if you're looking for something different, whether that's a fresh approach, unique mechanics, or just a new world to explore and conquer. 📋The games featured here are a mix of native Linux titles and those that run well via Proton.1. Grim DawnSteam Deck status: Playable 🟡 Grim Dawn drops you into a dark, war-torn world where humanity struggles to survive against supernatural forces. The game offers excellent character customization, a rich loot system, and a sprawling isometric world filled with dangerous enemies and challenging quests. Combat is fast-paced and strategic, with a strong emphasis on building your character’s unique skill set through dual-classing and crafting. 📋Why this game? It nails the core Diablo elements: loot-driven progression, complex character builds, and an immersive dark fantasy setting that keeps players hooked.Grim Dawn2. Last EpochSteam Deck status: Verified ✅ Last Epoch takes you across time itself in a dark fantasy world on the brink of ruin. You travel between eras to uncover ancient threats, mastering a comprehensive class system packed with unique mastery trees and hundreds of skills to experiment with. Combat is fast and responsive, supporting a wide range of builds and playstyles, from minion-focused necromancers to melee berserkers. 
A major update, Season 3 "Beneath Ancient Skies," is scheduled for August 21, 2025, promising fresh content, reworks, and seasonal progression to keep long-term players engaged. 📋Why this game? It’s one of the most modern, polished Diablo-style ARPGs out there with great character customization, excellent combat, and a strong future roadmap.Last Epoch3. Halls of TormentSteam Deck status: Verified ✅ In Halls of Torment, you navigate haunting dungeons filled with deadly traps and monsters. The game focuses on dark, atmospheric environments and intense, tactical combat. Players collect weapons, armor, and artifacts to power up and survive increasingly difficult challenges, with the game centered around survival through increasingly difficult stages. 📋Why this game? It blends Diablo-style enemy hordes, loot mechanics, and build variety with modern auto-attack mechanics, offering an absorbing experience.Halls of Torment4. Soulstone SurvivorsSteam Deck status: Playable 🟡 Soulstone Survivors combines fast-paced action with rogue-lite elements, putting you in dungeons filled with enemies and secrets. It features various characters, each with distinct playstyles, and a loot system that rewards exploration and experimentation. The game’s combat is fluid and keeps players constantly engaged, with quick reflexes and smart positioning making all the difference in surviving tougher waves of enemies and bosses. 📋Why this game? Its blend of roguelike unpredictability and isometric ARPG combat creates a compelling Diablo-inspired experience.Soulstone Survivors5. The SlormancerSteam Deck status: Playable 🟡 Dive into a dark and vibrant world filled with relentless monsters and challenging bosses. The Slormancer combines fluid, fast-paced combat with a diverse and flexible skill system, letting you experiment with countless builds. The pixel art visuals bring a unique charm to the grim atmosphere, while hundreds of weapons and items provide endless ways to customize your character. 
Each dungeon run tests your strategy as you hunt for powerful gear and unlock new abilities. 📋Why this game? It offers the core Diablo experience of loot hunting and character growth but spices things up with dynamic combat and a distinctive pixel art style.The Slormancer6. Katana DragonSteam Deck status: Unknown ⚠️ Katana Dragon is a voxel-style ninja action RPG where you control twin ninjas, Shin and Nobi, who are on a quest to lift a curse threatening the land of Sogen. The game features hand-designed environments, including ancient temples, hidden dungeons, and vibrant forests. Players can learn new ninja skills, upgrade Dragon Gems, and equip Cursed Seals to enhance abilities. Combat involves using a katana and various other abilities to defeat enemies, and the game includes puzzles and traps within dungeons. 📋Why this game? It blends Diablo’s core loot and build systems with a fresh ninja theme and great Linux support, standing out among indie ARPGs.Katana Dragon7. Book of DemonsSteam Deck status: Playable 🟡 Book of Demons is a hack-and-slash action RPG with a unique "paper cut-out" art style and a simplified inventory system. Players explore dungeons filled with monsters and traps, using cards to customize skills and spells. The game features multiple character classes and a distinctive blend of traditional ARPG mechanics with innovative deck-building elements. 📋Why this game? It combines classic Diablo-like dungeon crawling and loot with a fresh, easy-to-pick-up gameplay loop.Book of Demons8. Luminaria: Dark EchoesSteam Deck status: Unknown ⚠️ Luminaria: Dark Echoes invites you into a shadowy steampunk world where each playable character offers a distinct approach to combat. The game’s dungeons and loot system keep things fresh, demanding skill and adaptability. Expect tactical battles and challenging boss encounters amid atmospheric settings. 
Plus, the emphasis on timing and strategy encourages players to experiment with each character’s unique skills and weapons. 📋Why this game? Its focus on character variety and loot-driven progression aligns well with Diablo-style gameplay.Luminaria: Dark Echoes9. Mine & SlashSteam Deck status: Unknown ⚠️ Mine & Slash is a roguelite 3D dungeon crawler where you explore deep, magical dungeons, mining gold and gathering valuable resources. The deeper you go, the tougher the enemies become. Customize your hero by upgrading your loot and equipment, unlocking powerful skills to enhance your combat abilities. Engage in mini-quests and face off against various challenging foes. 📋Why this game? It offers a fresh mix of mining and dungeon crawling with RPG progression and skill unlocks, adding a unique twist to the Diablo-like genre.Mine & Slash10. Conquest DarkSteam Deck status: Playable 🟡 In a world crushed by undead armies and ancient horrors, you get to choose from one of three classes to fight the darkness. Each class has unique powers to master. Conquest Dark focuses on quick action and short "Ritual" runs where you build your character with powerful weapons and skills. Along the way, you explore dangerous places full of monsters and mini-bosses, find better gear, unlock new abilities to grow stronger, and survive the dark forces threatening the world. 📋Why this game? Its class variety, crafting, and combat style align well with Diablo’s gameplay style.Conquest Dark11. Netherworld CovenantSteam Deck status: Playable 🟡 Netherworld Covenant is a dark fantasy isometric action roguelike with hardcore, soulslike combat. You play as a lone survivor wielding the Nether Lantern, a cursed artifact that binds you to fallen souls. Combat is built around deliberate timing and tactical decisions. Parry, dodge, and counter with help from spectral allies like the Swordsman, Rogue, Ranger, or Guardian, each offering unique battlefield advantages and movement abilities. 
Venture through procedurally generated dungeons, fight corrupt legends, and uncover a world shaped by loss, ambition, and the remnants of broken pacts. 📋Why this game? It mixes precise combat, dark lore, and strategy in a way that will feel familiar to Diablo fans.Netherworld Covenant12. Into the Restless RuinsSteam Deck status: Verified ✅ Into the Restless Ruins is a roguelike deckbuilder where each card you play constructs the dungeon itself. Rooms, corridors, and special tiles expand the ruins, affecting how you battle and harvest resources. Combat plays out in auto-battle style as you explore the dungeon you’ve created, seeking Glimour and progressing toward the final encounter with The Warden. Characters from Scottish folklore offer upgrades and relics that shape your strategy, with additional lore and gameplay depth revealed across multiple runs. 📋Why this game? It creatively merges deckbuilding and dungeon crawling in an isometric roguelike, echoing Diablo’s core ideas of build variety, loot, and replayability.Into the Restless Ruins13. Kid Mystic: Enchanted EditionSteam Deck status: Unknown ⚠️ Originally released in 1999 and updated over the years, Kid Mystic is a colorful action RPG filled with spells, quirky humor, and dungeon crawling. The Enchanted Edition revamps the original with modern features like skill trees, new levels, and upgraded mechanics. Players cast spells, solve puzzles, and fight bosses across multiple gameplay modes. These include the original 1999 version, the expanded 2004 edition, and the new Modern Mode with updated mechanics and challenge options like Brutal Mode. 📋Why this game? It mixes nostalgic action RPG combat with layered progression systems and dungeon crawling.Kid Mystic: Enchanted Edition14. Lost in Random: The Eternal DieSteam Deck status: Verified ✅ Lost in Random: The Eternal Die is a standalone roguelike dungeon crawler set in the gothic fantasy world of Random. 
Players take on the role of Queen Aleksandra, fighting for vengeance and redemption using fast-paced combat combined with strategic dice mechanics. Each run offers different weapon and relic combinations, with over 100 relics and 15 card-based powers to customize builds. Players use their sentient die-companion, Fortune, in real-time tactical battles that unfold across four dynamically generated biomes. 📋Why this game? It mixes dark fantasy, dice-driven chaos, and real-time isometric action, offering a creative spin on Diablo-style roguelikes. Lost in Random: The Eternal Die

More?

I understand that there are probably more such games out there that you would like to see on this list. Why not mention them in the comments? Enjoy the summer holidays with these games.
  19. by: Mészáros Róbert Wed, 30 Jul 2025 13:21:05 +0000 After four years, the demos in my “Headless Form Submission with the WordPress REST API” article finally stopped working. The article includes CodePen embeds that demonstrate how to use the REST API endpoints of popular WordPress form plugins to capture and display validation errors and submission feedback when building a completely custom front-end. The pens relied on a WordPress site I had running in the background. But during a forced infrastructure migration, the site failed to transfer properly, and, even worse, I lost access to my account. Sure, I could have contacted support or restored a backup elsewhere. But the situation made me wonder: what if this had not been WordPress? What if it were a third-party service I couldn’t self-host or fix? Is there a way to build demos that do not break when the services they rely on fail? How can we ensure educational demos stay available for as long as possible? Or is this just inevitable? Are demos, like everything else on the web, doomed to break eventually? Parallels with software testing Those who write tests for their code have long wrestled with similar questions, though framed differently. At the core, the issue is the same. Dependencies, especially third-party ones, become hurdles because they are outside the bounds of control. Not surprisingly, the most reliable way to eliminate issues stemming from external dependencies is to remove the external service entirely from the equation, effectively decoupling from it. Of course, how this is done, and whether it’s always possible, depends on the context. As it happens, techniques for handling dependencies can be just as useful when it comes to making demos more resilient. To keep things concrete, I’ll be using the mentioned CodePen demos as an example. But the same approach works just as well in many other contexts. 
Decoupling REST API dependencies

While there are many strategies and tricks, the two most common approaches to breaking reliance on a REST API are:

- Mocking the HTTP calls in code and, instead of performing real network requests, returning stubbed responses
- Using a mock API server as a stand-in for the real service and serving predefined responses in a similar manner

Both have trade-offs, but let's look at those later.

Mocking a response with an interceptor

Modern testing frameworks, whether for unit or end-to-end testing, such as Jest or Playwright, offer built-in mocking capabilities. However, we don't necessarily need these, and we can't use them in the pens anyway. Instead, we can monkey patch the Fetch API to intercept requests and return mock responses. With monkey patching, when changing the original source code isn't feasible, we can introduce new behavior by overwriting existing functions. Implementing it looks like this:

```javascript
const fetchWPFormsRestApiInterceptor = (fetch) => async (
  resource,
  options = {}
) => {
  // To make sure we are dealing with the data we expect
  if (typeof resource !== "string" || !(options.body instanceof FormData)) {
    return fetch(resource, options);
  }

  if (resource.match(/wp-json\/contact-form-7/)) {
    return contactForm7Response(options.body);
  }

  if (resource.match(/wp-json\/gf/)) {
    return gravityFormsResponse(options.body);
  }

  return fetch(resource, options);
};

window.fetch = fetchWPFormsRestApiInterceptor(window.fetch);
```

We override the default fetch with our own version that adds custom logic for specific conditions, and otherwise lets requests pass through unchanged. The replacement function, fetchWPFormsRestApiInterceptor, acts like an interceptor. An interceptor is simply a pattern that modifies requests or responses based on certain conditions. Many HTTP libraries, like the once-popular axios, offer a convenient API to add interceptors without resorting to monkey patching, which should be used sparingly. 
It’s all too easy to introduce subtle bugs unintentionally or create conflicts when managing multiple overrides. With the interceptor in place, returning a fake response is as simple as calling the static json() method of the Response object:

const contactForm7Response = (formData) => {
  const body = {};
  return Response.json(body);
};

Depending on the need, the response can be anything from plain text to a Blob or ArrayBuffer. It’s also possible to specify custom status codes and include additional headers. For the CodePen demo, the response might be built like this:

const contactForm7Response = (formData) => {
  const submissionSuccess = {
    into: "#",
    status: "mail_sent",
    message: "Thank you for your message. It has been sent.",
    posted_data_hash: "d52f9f9de995287195409fe6dcde0c50"
  };

  const submissionValidationFailed = {
    into: "#",
    status: "validation_failed",
    message: "One or more fields have an error. Please check and try again.",
    posted_data_hash: "",
    invalid_fields: []
  };

  if (!formData.get("somebodys-name")) {
    submissionValidationFailed.invalid_fields.push({
      into: "span.wpcf7-form-control-wrap.somebodys-name",
      message: "This field is required.",
      idref: null,
      error_id: "-ve-somebodys-name"
    });
  }

  // Or a more thorough way to check the validity of an email address
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(formData.get("any-email"))) {
    submissionValidationFailed.invalid_fields.push({
      into: "span.wpcf7-form-control-wrap.any-email",
      message: "The email address entered is invalid.",
      idref: null,
      error_id: "-ve-any-email"
    });
  }

  // The rest of the validations...

  const body = !submissionValidationFailed.invalid_fields.length
    ? submissionSuccess
    : submissionValidationFailed;

  return Response.json(body);
};

At this point, any fetch call to a URL matching wp-json/contact-form-7 returns the faked success or validation errors, depending on the form input. Now let’s contrast that with the mocked API server approach.
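As a quick sanity check, the interceptor can be exercised without any WordPress site at all. Here is a minimal, self-contained sketch (assuming a browser or Node 18+, where fetch, FormData, and Response are globals; the URL and the stripped-down mock below are illustrative stand-ins, not the article's exact code):

```javascript
// A stripped-down stand-in for the article's contactForm7Response mock.
const contactForm7Response = (formData) => {
  const ok = Boolean(formData.get("somebodys-name"));
  return Response.json({
    status: ok ? "mail_sent" : "validation_failed",
    message: ok
      ? "Thank you for your message. It has been sent."
      : "One or more fields have an error. Please check and try again."
  });
};

// The interceptor pattern, reduced to the Contact Form 7 branch.
const fetchWPFormsRestApiInterceptor = (fetch) => async (resource, options = {}) => {
  if (typeof resource !== "string" || !(options.body instanceof FormData)) {
    return fetch(resource, options);
  }
  if (resource.match(/wp-json\/contact-form-7/)) {
    return contactForm7Response(options.body);
  }
  return fetch(resource, options);
};

globalThis.fetch = fetchWPFormsRestApiInterceptor(globalThis.fetch);

(async () => {
  // No network request happens: the interceptor answers instead.
  const body = new FormData();
  body.append("somebodys-name", "Ada");
  const res = await fetch("https://example.test/wp-json/contact-form-7/v1/feedback", {
    method: "POST",
    body
  });
  const json = await res.json();
  console.log(json.status); // "mail_sent"
})();
```

Submitting without the required field flips the response to validation_failed, which is exactly the kind of behavior the demos need to exercise.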
Mocked API server with serverless

Running a traditionally hosted mock API server reintroduces concerns around availability, maintenance, and cost. Even though the hype around serverless functions has quieted, we can sidestep these issues by using them. And with DigitalOcean Functions offering a generous free tier, creating mocked APIs is practically free and requires no more effort than manually mocking them. For simple use cases, everything can be done through the Functions control panel, including writing the code in the built-in editor. Check out this concise presentation video to see it in action: For more complex needs, functions can be developed locally and deployed using doctl (DigitalOcean’s CLI). To return the mocked response, it’s easier if we create a separate Function for each endpoint, since we can avoid adding unnecessary conditions. Fortunately, we can stick with JavaScript (Node.js), starting with nearly the same base we used for contactForm7Response:

function main(event) {
  const body = {};
  return { body };
}

We must name the handler function main, which is invoked when the endpoint is called. The function receives the event object as its first argument, containing the details of the request. Once again, we could return anything, but to return the JSON response we need, it’s enough to simply return an object. We can reuse the same code for creating the response as-is. The only difference is that we have to extract the form input data from the event as FormData ourselves:

function main(event) {
  // How do we get the FormData from the event?
  const formData = new FormData();

  const submissionSuccess = {
    // ...
  };

  const submissionValidationFailed = {
    // ...
  };

  if (!formData.get("somebodys-name")) {
    submissionValidationFailed.invalid_fields.push({
      // ...
    });
  }

  // Or a more thorough way to check the validity of an email address
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(formData.get("any-email"))) {
    submissionValidationFailed.invalid_fields.push({
      // ...
    });
  }

  // The rest of the validations...

  const body = !submissionValidationFailed.invalid_fields.length
    ? submissionSuccess
    : submissionValidationFailed;

  return { body };
}

As for converting the data, serverless functions typically expect JSON inputs, so an extra parsing step is required for other data types. As it happens, the forms in the CodePen demos are submitted as multipart/form-data. Without any libraries, we can convert a multipart/form-data string into a FormData object by taking advantage of the Response API’s capabilities:

async function convertMultipartFormDataToFormData(data) {
  const matches = data.match(/^\s*--(\S+)/);

  if (!matches) {
    return new FormData();
  }

  const boundary = matches[1];

  return new Response(data, {
    headers: { "Content-Type": `multipart/form-data; boundary=${boundary}` }
  }).formData();
}

The code is mostly focused on extracting the boundary variable. This is typically autogenerated, for example, when submitting a form in a browser. The submitted raw data is available via event.http.body, but since it’s base64-encoded, we need to decode it first:

async function main(event) {
  const formData = await convertMultipartFormDataToFormData(
    Buffer.from(event?.http?.body ?? "", "base64").toString("utf8")
  );

  // ...

  const body = !submissionValidationFailed.invalid_fields.length
    ? submissionSuccess
    : submissionValidationFailed;

  return { body };
}

And that’s it. With this approach, all that’s left is to replace calls to the original APIs with calls to the mocked ones.

Closing thoughts

Ultimately, both approaches help decouple the demos from the third-party API dependency. In terms of effort, at least for this specific example, they seem comparable. It’s hard to beat the fact that there’s no external dependency with the manual mocking approach, not even on something we somewhat control, and everything is bundled together. In general, without knowing specific details, there are good reasons to favor this approach for small, self-contained demos.
But using a mocked server API also has its advantages. A mocked server API can power not only demos, but also various types of tests. For more complex needs, a dedicated team working on the mocked server might prefer a different programming language than JavaScript, or they might opt to use a tool like WireMock instead of starting from scratch. As with everything, it depends. There are many criteria to consider beyond what I’ve just mentioned. I also don’t think this approach necessarily needs to be applied by default. After all, I had the CodePen demos working for four years without any issues. The important part is having a way to know when demos break (monitoring), and when they do, having the right tools at our disposal to handle the situation. Keeping Article Demos Alive When Third-Party APIs Die originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
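One detail from the article above that is worth poking at in isolation is the boundary extraction inside convertMultipartFormDataToFormData. The sketch below round-trips a FormData through the Response API to get a realistic multipart body with an autogenerated boundary, then recovers that boundary from the raw text (assuming Node 18+ or a browser, where Response and FormData are globals; serializeFormData and extractBoundary are hypothetical helper names):

```javascript
// Serialize a FormData the way a browser would, with an autogenerated boundary.
async function serializeFormData(formData) {
  return new Response(formData).text();
}

// The helper's first step: recover the boundary from the raw body itself.
function extractBoundary(raw) {
  const matches = raw.match(/^\s*--(\S+)/);
  return matches ? matches[1] : null;
}

(async () => {
  const formData = new FormData();
  formData.append("somebodys-name", "Ada");

  const raw = await serializeFormData(formData);
  const boundary = extractBoundary(raw);

  // The body opens with the dash-boundary line and carries the field name.
  console.log(raw.startsWith(`--${boundary}`)); // true
  console.log(raw.includes('name="somebodys-name"')); // true
})();
```

Because the Response API both serializes and parses multipart bodies, the autogenerated boundary never has to be known ahead of time: it can always be recovered from the first line of the raw payload, which is what the article's helper relies on.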
  20. by: Zell Liew Mon, 28 Jul 2025 12:42:16 +0000 Many CSS experts weighed in heavily on possible syntaxes for a new masonry layout feature last year. There were two main camps and a third camp that strikes a balance between the two:

• Use display: masonry
• Use grid-template-rows: masonry
• Use item-pack: collapse

I don’t think they’ve come up with a resolution yet. But you might want to know that Firefox already supports the masonry layout with the second syntax. And Chrome is testing it with the first syntax. While it’s cool to see native support for CSS Masonry evolving, we can’t really use it in production if other browsers don’t support the same implementation… So, instead of adding my voice to one of those camps, I went on to figure out how to make masonry work today with other browsers. I’m happy to report I’ve found a way — and, bonus! — that support can be provided with only 66 lines of JavaScript. In this article, I’m gonna show you how it works. But first, here’s a demo for you to play with, just to prove that I’m not spewing nonsense. Note that there’s gonna be a slight delay since we’re waiting for an image to load first. If you’re placing a masonry at the top fold, consider leaving images out because of this! Anyway, here’s the demo: CodePen Embed Fallback

What in the magic is this?! Now, there are a ton of things I’ve included in this demo, even though there are only 66 lines of JavaScript:

• You can define the masonry with any number of columns.
• Each item can span multiple columns.
• We wait for media to load before calculating the size of each item.
• We made it responsive by listening to changes with the ResizeObserver.

These make my implementation incredibly robust and ready for production use, while also way more flexible than many Flexbox masonry knockoffs out there on the interwebs. Now, a hot tip.
If you combine this with Tailwind’s responsive variants and arbitrary values, you can build even more flexibility into this masonry grid without writing more CSS. Okay, before you get hyped up any further, let’s come back to the main question: How the heck does this work?

Let’s start with a polyfill

Firefox already supports masonry layouts via the second camp’s syntax. Here’s the CSS you need to create a CSS masonry grid layout in Firefox.

.masonry {
  display: grid;
  grid-template-columns: repeat(
    auto-fit,
    minmax(min(var(--item-width, 200px), 100%), 1fr)
  );
  grid-template-rows: masonry;
  grid-auto-flow: dense; /* Optional, but recommended */
}

Since Firefox already has native masonry support, naturally we shouldn’t mess around with it. The best way to check if masonry is supported by default is to check whether grid-template-rows can hold the masonry value.

function isMasonrySupported(container) {
  return getComputedStyle(container).gridTemplateRows === 'masonry'
}

If masonry is supported, we’ll skip our implementation. Otherwise, we’ll do something about it.

const containers = document.querySelectorAll('.masonry')

containers.forEach(async container => {
  if (isMasonrySupported(container)) return
})

Masonry layout made simple

Now, I want to preface this segment by saying that I’m not the one who invented this technique. I figured it out when I was digging through the web, searching for possible ways to implement a masonry grid today. So kudos goes to the unknown developer who developed the idea first — and perhaps me for understanding, converting, and using it. The technique goes like this:

1. We set grid-auto-rows to 0px.
2. Then we set row-gap to 1px.
3. Then we get the item’s height through getBoundingClientRect.
4. We then size the item’s “row allocation” by adding the height and the column-gap value together.

This is really unintuitive if you’ve been using CSS Grid the standard way. But once you get this, you can also grasp how this works!
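In numbers, the trick works out like this: with grid-auto-rows: 0px and row-gap: 1px, an item that spans N rows occupies (N - 1) pixels of gaps inside its span, and one more 1px gap follows it, so the next item starts exactly N pixels lower. That is why the span is simply the item height plus the desired gap. A sketch of the arithmetic, separated from the DOM (rowSpanFor is a hypothetical name):

```javascript
// Row-span arithmetic for the 0px-rows / 1px-gap masonry trick.
// Each row track is 0px tall and each gap is 1px, so a span of N
// reserves exactly N pixels before the next item begins.
function rowSpanFor(itemHeight, gap) {
  return Math.round(itemHeight + gap);
}

// A 120px-tall card in a grid with a 16px gap:
const span = rowSpanFor(120, 16);
console.log(span); // 136 -> 135px of gaps inside the span + 1px trailing gap = 120 + 16
```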
Now, because this is so unintuitive, we’re gonna take things step-by-step so you see how this whole thing evolves into the final output.

Step by step

First, we set grid-auto-rows to 0px. This is whacky because every grid item will effectively have “zero height”. Yet, at the same time, CSS Grid maintains the order of the columns and rows!

containers.forEach(async container => {
  // ...
  container.style.gridAutoRows = '0px'
})

Second, we set row-gap to 1px. Once we do this, you begin to notice an initial stacking of the rows, each one one pixel below the previous one.

containers.forEach(async container => {
  // ...
  container.style.gridAutoRows = '0px'
  container.style.setProperty('row-gap', '1px', 'important')
})

Third, assuming there are no images or other media elements in the grid items, we can easily get the height of each grid item with getBoundingClientRect. We can then restore the “height” of the grid item in CSS Grid by substituting grid-row-end with the height value. This works because each row-gap is now 1px tall. When we do this, you can see the grid beginning to take shape. Each item is now (kinda) back at their respective positions:

containers.forEach(async container => {
  // ...
  let items = container.children
  layout({ items })
})

function layout({ items }) {
  items.forEach(item => {
    const ib = item.getBoundingClientRect()
    item.style.gridRowEnd = `span ${Math.round(ib.height)}`
  })
}

We now need to restore the row gap between items. Thankfully, since masonry grids usually have the same column-gap and row-gap values, we can grab the desired row gap by reading the column-gap value. Once we do that, we add it to grid-row-end to expand the number of rows (the “height”) taken up by the item in the grid:

containers.forEach(async container => {
  // ...
  const items = container.children
  const colGap = parseFloat(getComputedStyle(container).columnGap)
  layout({ items, colGap })
})

function layout({ items, colGap }) {
  items.forEach(item => {
    const ib = item.getBoundingClientRect()
    item.style.gridRowEnd = `span ${Math.round(ib.height + colGap)}`
  })
}

And, just like that, we’ve made the masonry grid! Everything from here on is simply to make this ready for production.

Waiting for media to load

Try adding an image to any grid item and you’ll notice that the grid breaks. That’s because the item’s height will be “wrong”. It’s wrong because we took the height value before the image was properly loaded. The DOM doesn’t know the dimensions of the image yet. To fix this, we need to wait for the media to load before running the layout function. We can do this with the following code (which I shall not explain since this is not much of a CSS trick 😅):

containers.forEach(async container => {
  // ...
  try {
    await Promise.all([areImagesLoaded(container), areVideosLoaded(container)])
  } catch(e) {}

  // Run the layout function after images are loaded
  layout({ items, colGap })
})

// Checks if images are loaded
async function areImagesLoaded(container) {
  const images = Array.from(container.querySelectorAll('img'))
  const promises = images.map(img => {
    return new Promise((resolve, reject) => {
      if (img.complete) return resolve()
      img.onload = resolve
      img.onerror = reject
    })
  })
  return Promise.all(promises)
}

// Checks if videos are loaded
function areVideosLoaded(container) {
  const videos = Array.from(container.querySelectorAll('video'))
  const promises = videos.map(video => {
    return new Promise((resolve, reject) => {
      if (video.readyState === 4) return resolve()
      video.onloadedmetadata = resolve
      video.onerror = reject
    })
  })
  return Promise.all(promises)
}

Voilà, we have a CSS masonry grid that works with images and videos!

Making it responsive

This is a simple step.
We only need to use the ResizeObserver API to listen for any change in dimensions of the masonry grid container. When there’s a change, we run the layout function again:

containers.forEach(async container => {
  // ...
  const observer = new ResizeObserver(observerFn)
  observer.observe(container)

  function observerFn(entries) {
    for (const entry of entries) {
      layout({ colGap, items })
    }
  }
})

This demo uses the standard Resize Observer API. But you can make it simpler by using the refined resizeObserver function we built the other day.

containers.forEach(async container => {
  // ...
  const observer = resizeObserver(container, {
    callback () {
      layout({ colGap, items })
    }
  })
})

That’s pretty much it! You now have a robust masonry grid that you can use in every working browser that supports CSS Grid! Exciting, isn’t it? This implementation is so simple to use!

Masonry grid with Splendid Labz

If you’re not averse to using code built by others, you might want to consider grabbing the one I’ve built for you in Splendid Labz. To do that, install the helper library and add the necessary code:

# Installing the library
npm install @splendidlabz/styles

/* Import all layouts code */
@import '@splendidlabz/layouts';

// Use the masonry script
import { masonry } from '@splendidlabz/styles/scripts'
masonry()

One last thing: I’ve been building a ton of tools to help make web development much easier for you and me. I’ve parked them all under the Splendid Labz brand — and one of these examples is this masonry grid I showed you today. If you love this, you might be interested in other layout utilities that make layouts super simple to build. Now, I hope you have enjoyed this article today. Go unleash your new CSS masonry grid if you wish to, and all the best! Making a Masonry Layout That Works Today originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  21. by: Chris Coyier Sun, 27 Jul 2025 15:27:51 +0000 Chris & Rachel hop on the show to talk about the expanded privacy (access) model in the 2.0 editor (in Private Beta as we speak). Private Pens have always been a big deal, but as private as they are, if someone has the URL, they have the URL, and it doesn’t always feel very private. There are two new levels of privacy in the 2.0 editor: password protected and collaborators only. Passwords are an obvious choice we probably should have done long ago. With it, both the Pen in the editor itself, as well as the potentially deployed site are password protected. Our new permissions model is intertwined in this. Now you can invite others directly to be a fellow Editor or simply a Viewer to an otherwise private Pen. If you set the privacy level to “collaborators only”, that’s the most private a Pen can possibly be. Time Jumps 00:07 We’re back – Rach edition! 01:46 Permissions and privacy 05:35 Building a password feature for pens 10:12 Invite people to edit or view a pen 13:13 Collaborator level access 16:29 Viewer and editor options 19:52 Needing to build a dashboard to handle invites 27:46 Dealing with edge cases
  22. by: Abhishek Prakash Sun, 27 Jul 2025 18:38:45 +0530
Hex to ASCII Converter: convert hexadecimal values to readable ASCII text.

Supported Formats. This tool accepts hexadecimal input in various formats:

• Continuous: 48656c6c6f20576f726c64
• Space-separated: 48 65 6c 6c 6f 20 57 6f 72 6c 64
• With 0x prefix: 0x48 0x65 0x6c 0x6c 0x6f
• Mixed case: 48656C6C6F20576F726C64

Made by Linux Handbook.

The Hex to ASCII conversion process. Converting hexadecimal to ASCII involves these steps:

1. Parse the hex string: remove any formatting (spaces, 0x prefixes)
2. Validate hex characters: ensure only 0-9, A-F characters are present
3. Group into bytes: each pair of hex digits represents one byte (8 bits)
4. Convert to decimal: each hex byte becomes a decimal value (0-255)
5. Map to ASCII: use the decimal value to find the corresponding ASCII character

Let's convert the hex string 48656C6C6F to ASCII:

48 → 72 → 'H'
65 → 101 → 'e'
6C → 108 → 'l'
6C → 108 → 'l'
6F → 111 → 'o'

Result: "Hello"

You don't have to rely on online hex to ASCII converters. Linux has multiple commands to handle it right there in the terminal. See: Convert Hex to ASCII Characters in Linux Bash Shell — various ways of converting Hex to ASCII characters in the Linux command line and bash scripts (Linux Handbook, Team LHB).

Don't know the basics of hexadecimal or ASCII? Let me help you with that.

What is Hexadecimal? Hexadecimal (often shortened to "hex") is a base-16 number system that uses 16 distinct symbols to represent values. Unlike our familiar decimal system (base-10) that uses digits 0-9, hexadecimal uses digits 0-9 and letters A-F to represent values from 0 to 15.

Why use Hexadecimal? Hexadecimal is widely used in computing because it provides a more human-readable way to represent binary data:

• Compact representation: each hex digit represents exactly 4 bits (binary digits)
• Easy conversion: converting between hex and binary is straightforward
• Memory addresses: system memory addresses are typically displayed in hex
• Color codes: web colors use hex notation (#FF0000 for red)
• File formats: many file headers and data structures use hex values

Hexadecimal Digit Mapping:

Decimal  Hexadecimal  Binary
0        0            0000
1        1            0001
2        2            0010
3        3            0011
4        4            0100
5        5            0101
6        6            0110
7        7            0111
8        8            1000
9        9            1001
10       A            1010
11       B            1011
12       C            1100
13       D            1101
14       E            1110
15       F            1111

Understanding ASCII. ASCII (American Standard Code for Information Interchange) is a character encoding standard that assigns unique numeric values to characters.
Each ASCII character is represented by a 7-bit number (0-127), though extended ASCII uses 8 bits (0-255).

ASCII Character Categories:

• Control Characters (0-31): non-printable characters used for text formatting and control, e.g. 0 (NULL), 9 (TAB), 10 (Line Feed), 13 (Carriage Return)
• Printable Characters (32-126): visible characters including letters, numbers, and symbols, e.g. 32 (Space), 48-57 (0-9), 65-90 (A-Z), 97-122 (a-z)
• Extended ASCII (128-255): additional characters for international languages and symbols

Complete ASCII Table

Control Characters (0-31):

Dec Hex Char Description       Dec Hex Char Description
0   00  NUL  Null              16  10  DLE  Data Link Escape
1   01  SOH  Start of Header   17  11  DC1  Device Control 1
2   02  STX  Start of Text     18  12  DC2  Device Control 2
3   03  ETX  End of Text       19  13  DC3  Device Control 3
4   04  EOT  End of Trans.     20  14  DC4  Device Control 4
5   05  ENQ  Enquiry           21  15  NAK  Negative Ack
6   06  ACK  Acknowledge       22  16  SYN  Synchronous Idle
7   07  BEL  Bell              23  17  ETB  End of Block
8   08  BS   Backspace         24  18  CAN  Cancel
9   09  TAB  Horizontal Tab    25  19  EM   End of Medium
10  0A  LF   Line Feed         26  1A  SUB  Substitute
11  0B  VT   Vertical Tab      27  1B  ESC  Escape
12  0C  FF   Form Feed         28  1C  FS   File Separator
13  0D  CR   Carriage Return   29  1D  GS   Group Separator
14  0E  SO   Shift Out         30  1E  RS   Record Separator
15  0F  SI   Shift In          31  1F  US   Unit Separator

Printable Characters (32-126):

Dec Hex Char   Dec Hex Char   Dec Hex Char   Dec Hex Char
32  20  ␣      56  38  8      80  50  P      104 68  h
33  21  !      57  39  9      81  51  Q      105 69  i
34  22  "      58  3A  :      82  52  R      106 6A  j
35  23  #      59  3B  ;      83  53  S      107 6B  k
36  24  $      60  3C  <      84  54  T      108 6C  l
37  25  %      61  3D  =      85  55  U      109 6D  m
38  26  &      62  3E  >      86  56  V      110 6E  n
39  27  '      63  3F  ?      87  57  W      111 6F  o
40  28  (      64  40  @      88  58  X      112 70  p
41  29  )      65  41  A      89  59  Y      113 71  q
42  2A  *      66  42  B      90  5A  Z      114 72  r
43  2B  +      67  43  C      91  5B  [      115 73  s
44  2C  ,      68  44  D      92  5C  \      116 74  t
45  2D  -      69  45  E      93  5D  ]      117 75  u
46  2E  .      70  46  F      94  5E  ^      118 76  v
47  2F  /      71  47  G      95  5F  _      119 77  w
48  30  0      72  48  H      96  60  `      120 78  x
49  31  1      73  49  I      97  61  a      121 79  y
50  32  2      74  4A  J      98  62  b      122 7A  z
51  33  3      75  4B  K      99  63  c      123 7B  {
52  34  4      76  4C  L      100 64  d      124 7C  |
53  35  5      77  4D  M      101 65  e      125 7D  }
54  36  6      78  4E  N      102 66  f      126 7E  ~
55  37  7      79  4F  O      103 67  g      127 7F  DEL

Common use cases of hex in Linux

System Administration:
• Log analysis: many log files contain hex-encoded data
• Network debugging: packet captures often show hex dumps
• File analysis: examining binary files and headers
• Memory dumps: analyzing core dumps and memory contents

Development:
• Debugging: understanding binary data formats
• Reverse engineering: analyzing compiled programs
• Data recovery: extracting readable text from corrupted files
• Protocol analysis: decoding network protocols

Security:
• Forensics: examining evidence in hex format
• Malware analysis: understanding suspicious binary data
• Cryptography: working with encoded messages
• Hash verification: converting hash values to readable format

Common issues and pitfalls while converting hex to ASCII

1. Incorrect byte ordering (Endianness)

Problem: multi-byte values may be stored in different byte orders.
• Little-endian: least significant byte first (x86/x64 systems)
• Big-endian: most significant byte first (network protocols)

Example:
Hex: 41424344
Little-endian interpretation: DCBA
Big-endian interpretation: ABCD

Solution: understand the source system's byte order before conversion.

2. Character encoding confusion

Problem: not all text data uses ASCII encoding. Common encodings:
• UTF-8: variable-length encoding (1-4 bytes per character)
• UTF-16: 16-bit encoding with potential surrogate pairs
• Latin-1/ISO-8859-1: 8-bit encoding for Western European languages

Solution: identify the correct character encoding before conversion.

3. Non-printable characters

Problem: ASCII values 0-31 and 127 are control characters that don't display. Examples:
• 00 (NULL): string terminator in C
• 0A (Line Feed): newline in Unix
• 0D (Carriage Return): part of the Windows newline
• 1B (Escape): terminal escape sequences

Solution: handle control characters appropriately in your application context.

4. Invalid hex input

Common input errors:
• Odd number of characters: 48656C6C6 (missing final digit)
• Invalid characters: 48G56C6C6F (G is not valid hex)
• Mixed case inconsistency: usually not a problem, but be aware

Solution: validate and sanitize input before conversion.

5. Memory and performance issues

Problem: large hex strings can consume significant memory. For example, a 1 MB binary file becomes a 2 MB hex string. Solutions:
• Process data in chunks for large files
• Use streaming conversion for real-time processing
• Consider memory-mapped files for very large datasets

6. Whitespace and formatting variations

Common formats you might encounter:
• Continuous: 48656C6C6F
• Space-separated: 48 65 6C 6C 6F
• Comma-separated: 48,65,6C,6C,6F
• With prefixes: 0x48 0x65 0x6C 0x6C 0x6F
• Hex dump format: 00000000: 48 65 6C 6C 6F

Solution: use flexible parsing that handles multiple formats.

7. Context-specific interpretations

Problem: the same hex data might represent different things depending on context. Examples:
• FF could be 255 (decimal), -1 (signed byte), or 377 (octal)
• 0A could be a newline character or decimal 10
• 20 could be a space character or decimal 32

Solution: always consider the data format and intended use case.
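The endianness pitfall described above is easy to demonstrate: the same four bytes decode to different text depending on byte order. A short sketch (decodeBytes is a hypothetical helper name):

```javascript
// Reading the same four bytes in both byte orders.
function decodeBytes(bytes) {
  return bytes.map((b) => String.fromCharCode(b)).join("");
}

const bytes = [0x41, 0x42, 0x43, 0x44]; // the hex string 41424344

const bigEndian = decodeBytes(bytes);                   // most significant byte first
const littleEndian = decodeBytes([...bytes].reverse()); // least significant byte first

console.log(bigEndian, littleEndian); // ABCD DCBA
```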
Best Practices

For Developers:
• Always validate input before attempting conversion
• Handle errors gracefully with clear error messages
• Document expected input formats in your applications
• Test with edge cases including empty strings and invalid input
• Consider character encoding beyond basic ASCII when needed

For System Administrators:
• Understand your data source before conversion
• Use appropriate tools for different hex formats
• Verify results with known good data when possible
• Keep backups when working with important data
• Document your conversion process for future reference

For Security Professionals:
• Be aware of obfuscation techniques that might use hex encoding
• Understand that hex conversion can reveal sensitive data
• Use secure tools that don't log or cache sensitive information
• Verify data integrity after conversion processes
• Consider context when analyzing hex-encoded evidence

This comprehensive understanding of hexadecimal and ASCII conversion will help you effectively use conversion tools and troubleshoot common issues in Linux environments.
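Putting the parse / validate / group / convert / map steps together, a minimal converter along the lines of the tool above might look like this (a sketch with a hypothetical hexToAscii name; it accepts the continuous, space-separated, comma-separated, and 0x-prefixed formats listed earlier):

```javascript
// Hex-to-ASCII conversion following the five steps described above.
function hexToAscii(input) {
  // Parse: strip 0x prefixes, then spaces, commas, and colons.
  const hex = input.replace(/0x/gi, "").replace(/[\s,:]/g, "");

  // Validate: only 0-9 / A-F allowed, and an even number of digits.
  if (!/^[0-9a-fA-F]*$/.test(hex) || hex.length % 2 !== 0) {
    throw new Error("Invalid hexadecimal input");
  }

  let out = "";
  // Group into bytes, convert each pair to decimal, map to a character.
  for (let i = 0; i < hex.length; i += 2) {
    out += String.fromCharCode(parseInt(hex.slice(i, i + 2), 16));
  }
  return out;
}

console.log(hexToAscii("48656C6C6F"));               // "Hello"
console.log(hexToAscii("0x48 0x65 0x6C 0x6C 0x6F")); // "Hello"
```

Note that this sketch assumes plain ASCII data; for UTF-8 or other encodings (pitfall 2 above), byte pairs cannot be mapped to characters one-to-one.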
  23. by: Sourav Rudra Sat, 26 Jul 2025 13:47:27 GMT Wireless file transfers are incredibly convenient, especially between Linux and Android devices. No cables, no manual configuration needed. Just quick transfers from one device to another using your local network. I know that it is faster to transfer files, especially huge ones, via cable. But if your library has thousands of photos and videos, it takes several minutes to load them. When you want to share just a few selected photos, it is easier to select them on your phone and share them. Now, instead of uploading the selected files to cloud servers or sending them via WhatsApp, open source alternatives offer a more direct and private approach to file transfers with no third party involved. These tools are not only safer and faster but often more reliable than the aforementioned options. Let me share a few such tools you can use for transferring files between Linux and your Android smartphones.

1. Packet: No-nonsense, simple transfer

I begin this list with Packet, an app that makes transferring files effortless with its user interface and partial Quick Share implementation. With this, Android devices can easily connect to your Linux device over a wireless network. It is designed with GNOME users in mind, offering a clean GTK interface and optional Nautilus integration. Once set up, your Linux machine should automatically appear in the device list when sharing files from Android and vice versa. When I tested it, it never failed a transfer, and each session completed quickly without the need for any manual IP configuration or pairing.

⭐ Key Features
• Can be integrated with Nautilus
• Works with Android Quick Share
• Local network transfers with no internet dependency

Packet

In case you are interested, here's my full experience of using the Packet app: Packet is the Linux App You Didn't Know You Needed for Fast Android File Transfers — simple, fast file sharing between Linux and Android (It's FOSS, Sourav Rudra).

2.
KDE Connect: Your phone companion

KDE Connect is a great fit for Linux distros that come with KDE Plasma, but it’s also usable on others. What’s cool is that it comes packed with features like file transfer, battery info, clipboard sharing, and more. You can turn each feature on or off so you’re not stuck with stuff you don’t need. I have found that file transfers are pretty solid most of the time. Plus, it’s more than just for sending files. I can control music on my phone from my computer, get phone notifications right on my desktop, and even use my phone as a remote mouse or keyboard.

⭐ Key Features
• Phone battery level display on desktop
• Supports SMS messaging from desktop
• Ability to run commands remotely on your Linux computer

KDE Connect

3. Syncthing: Sync (every)thing

Syncthing is a powerful open source tool for syncing files across devices over your local network or the internet. It has a web-based interface that lets you manage your sync folders and devices from any browser. Though, it is better suited for users who know their way around their Linux computer. To sync files with Android, you will need a third-party app like Syncthing-Fork. Since Syncthing syncs entire folders rather than single files, I suggest creating a dedicated folder that stays empty until you drop files for transfer into it. When configured correctly, transfers are fast and reliable, and the detailed sync status and logs help you keep track of what’s happening during syncing.

⭐ Key Features
• Syncs full folders with continuous updates
• Requires third-party Android app for syncing
• Web GUI for easy device and folder management

Syncthing

4. LocalSend: Alternative to AirDrop

LocalSend is an open source app built for seamless, encrypted file sharing across devices over the same local network. It supports Linux, Android, Windows, macOS, and iOS, making it one of the most versatile cross-platform tools in this list.
You get a clean and user-friendly interface here, where devices are assigned randomized names to make them easy to identify before sending files. I use LocalSend in my workflow to quickly transfer documents and images from my phone to my computer. The only minor inconvenience is that I need to disable the VPN on both devices for them to detect each other in the app. ⭐ Key Features Broad cross-platform supportNo internet, no tracking, no adsEnd-to-end encrypted file transfersLocalSendLocalSend: An Open-Source AirDrop Alternative For Everyone!It’s time to ditch platform-specific solutions like AirDrop!It's FOSS NewsSourav Rudra5. Warpinator: File sharing tool from Linux Mint teamDeveloped by the Linux Mint team, Warpinator is an open source tool for easy file transfers over local networks. It’s simple to use but has some quirks. For example, to unlock all features, you need to enable Secure Mode. Without this, the app automatically exits after 60 minutes. On Android, I had to rely on a third-party Warpinator client since there isn’t an official app yet. Connecting new devices can sometimes be a bit finnicky, requiring some troubleshooting before everything works smoothly, so keep that in mind. ⭐ Key Features Automatic device discovery over local networkSimultaneous transfers with optional data compressionManual connections for restricted network environmentsWarpinatorBonus: GSConnect - GNOME's Unofficial KDEConnect CousinGSConnect is a shell extension based on KDE Connect that is built specifically for GNOME desktops. It works well with the KDE Connect Android app, providing most of the same features without needing to switch desktop environments. I really like the multimedia controls because I can control any music or video playing on my Linux setup right from my phone. File transfers usually go smoothly, but occasionally when I send multiple files from Linux to Android, only one file is received and the rest vanish. 
Other than this issue, GSConnect handles notifications, clipboard sharing, and more without problems. ⭐ Key Features Handy multimedia controls and notificationsFull KDE Connect features on GNOME desktopsBrowser extension support for Chrome and FirefoxGSConnectSeamlessly Connect Your Android Phone and Linux Using GSConnectLet’s improve the ‘relation’ between your Linux computer and the Android smartphone.It's FOSSSreenathConclusionAs you can see here, Packet and LocalSend are straight forward tools designed to primarily share files between your phone and Linux desktop. Warpinator can be used for sharing files between different desktop computers and operating systems, too. KDEConnect and GSConnect are more feature rich as they integrate some of your smartphone features with your desktop. Syncthing is a versatile, P2P file synchronization tool. Now, that I have presented these options, it is up to you to explore and decide the tool you would like to use for sharing files from your phone to your Linux system.
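As a bonus for terminal fans: KDE Connect (and, by extension, GSConnect setups paired with the KDE Connect Android app) also ships a command-line client, kdeconnect-cli, so transfers to an already-paired phone can be scripted. Below is a minimal sketch; the device ID "abc123" is a placeholder (real IDs come from `kdeconnect-cli --list-available`), and the helper function is mine, just for illustration.

```shell
# List paired and reachable devices to find your phone's device ID:
#   kdeconnect-cli --list-available

# Hypothetical helper that builds the share command for one file.
build_share_cmd() {
    local device_id="$1" file="$2"
    printf 'kdeconnect-cli --device %s --share %s' "$device_id" "$file"
}

# Print the command you would run for a phone with (placeholder) ID "abc123":
build_share_cmd "abc123" "$HOME/Pictures/holiday.jpg"
echo
```

To actually send a whole batch, you could loop over files, e.g. `for f in ~/Pictures/*.jpg; do kdeconnect-cli --device <your-id> --share "$f"; done`.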
  24. by: Lee Meyer Fri, 25 Jul 2025 13:48:36 +0000

Do we invent or discover CSS tricks? Michelangelo described his sculpting process as chiseling away superfluous material to reveal the sculpture hidden inside the marble, and Stephen King says his ideas are pre-existing things he locates and uncovers “like fossils in the ground.” Paragraph one is early for me to get pretentious enough to liken myself to those iconic creative forces, but my work on CSS-Tricks feels like “discovering,” not “inventing,” secret synergies between CSS features, which have been eyeing each other from disparate sections of the MDN web docs and patiently waiting for someone to let them dance together in front of the world.

Matchmaking for CSS features

A strategy for finding unexpected alliances between CSS features to achieve the impossible is recursive thinking, which I bring to the CSS world from my engineering background. When you build recursive logic, you need to find an escape hatch to avoid infinite recursion, and this inception-style mindset helps me identify pairings of CSS features that seem at odds with each other yet work together surprisingly well. Take these examples from my CSS experiments:

  - What if view-timeline took control of the thing that triggers view-timeline? This led to a pairing between view-timeline and position: fixed. These two features are like a bickering yet symbiotic “odd couple” at the heart of my web-slinger.css library for scroll-triggered animations in pure CSS.
  - What if keyframe animations could trigger other keyframe animations? This idea led to a throuple comprised of keyframe animations, style queries, and animation-play-state, which together can simulate collision detection in CSS.
  - What if scroll-state(scrollable: value) could control which directions are scrollable? That question led to a scrollytelling version of a “Choose Your Own Adventure,” which — wait, I haven’t published that one yet, but when I do, try to look surprised.
Accepting there is nothing new under the sun

Indeed, Mark Twain thought new ideas don’t exist — he described them as illusions we create by combining ideas that have always existed, turning and overlaying them in a “mental kaleidoscope” to “make new and curious combinations.” That doesn’t mean creating is easy, any more than a safe can be cracked just by knowing the possible digits. This brings back memories of playing Space Quest III as a kid, because after you quit the game, it would output smart-aleck command-line messages, one of which was: “Remember, we did it all with ones and zeros.” Perhaps the point of the mock inspirational tone is that we likely will not be able to sculpt like Michelangelo or make a bestselling game, even if we were given the same materials and tools (is this an inspirational piece or what?). However, understanding the limits of what creators do is the foundation for cracking the combination of creativity to open the door to somewhere we haven’t been. And one truth that helps with achieving magic with CSS is that its constraints help breed creativity.

Embracing limitations

Being asked “Why would you do that in CSS when you could just use JavaScript?” is like being asked “Why would you write a poem when it’s easier to write prose?” Samuel Taylor Coleridge defined prose as “words in their best order,” but poetry as “the best words in the best order.” If you think about it, the difference between prose and poetry is that the latter is based on increased constraints, which force us to find unexpected connections between ideas. Similarly, the artist Phil Hansen learned that embracing limitation could drive creativity after he suffered permanent nerve damage in his hand, causing it to jitter, which prevented him from drawing the way he had in the past. His early experiments with this new mindset included limiting himself to creating a work using only 80 cents’ worth of supplies.
This dovetails with the quote from Antoine de Saint-Exupéry often cited in web design, which says that perfection is achieved when there is nothing left to take away.

Embracing nothingness

The interesting thing about web design is how much it blends art and science. In both art and science, we challenge assumptions about whether commonsense relationships of cause and effect truly exist. Contrary to the common saying that “you can’t prove a negative,” we can, and it’s not necessarily harder than proving a positive. So, in keeping with the discussion above of embracing limitations and removing the superfluous until a creation reveals itself, many of my article ideas prove a negative by challenging the assumption that one thing is necessary to produce another. Maybe we don’t need JavaScript to produce a Sudoku solver, a Tinder-style swiper, or a classic scroll-driven animation demo. Maybe we don’t need checkbox hacks to make CSS games. Maybe we don’t need to hack CSS at all to recreate effects similar to what’s possible in browsers that support the CSS if() function. Maybe I can impart web dev wisdom on CSS-Tricks without including CSS at all, by sharing the “source code” of my thought process to help make you a better developer and a better person.

Going to extremes

Sometimes we can make a well-worn idea new again by taking it to the extreme. Seth Godin coined the term “edgecraft” to describe a technique for generating ideas by pushing a competitive advantage as far to the edge as the market dares us to go. Similarly, sometimes you can take an old CSS feature that people have seen before, but push it further than anyone else to create something unique. For example:

CSS-Tricks covered checkbox hacks and radio button hacks back in 2011. But in 2021, I decided to see if I could use hundreds of radio button hacks, using HTML generated with Pug, to create a working Sudoku app.
At one point, I found out that Chrome DevTools can display an infinite spinner of death when you throw too much generated CSS at it, which meant I had to limit myself to a 4×4 Sudoku, but that taught me more about what CSS can do and what it can’t.

The :target selector has existed since the 2000s. But in 2024, I took it to the extreme by using HAML to render the thousands of possible states of Tic-Tac-Toe to create a game with a computer opponent in pure CSS. At one point, CodePen refused to output as much HTML as I had asked it to, but it’s a fun way for newcomers to learn an important CSS feature; more engaging, in my opinion, than a table of contents demo.

Creating CSS outsider art

Chris Coyier has written about his distaste for the gatekeeping agenda hidden behind the question of whether CSS is a programming language. If CSS isn’t deemed “real” programming, that can be used as an excuse to hold CSS experts in less esteem than people who code in imperative languages, which leads to unfair pay and toxic workplace dynamics. But maybe the other side always seems greener due to the envy radiating from the people on that side, because as a full-stack engineer who completed a computer science degree, I always felt left out of the front-end conversations. It didn’t feel right to put “full-stack developer” on my résumé when the creation of everything users can see in a web app seemed mysterious to me.

And maybe it wasn’t just psychosomatic that CSS made my head hurt compared to other types of coding: research indicates that if you run fMRIs on people engaged in design tasks, design cognition appears to involve a unique cognitive profile compared to conventional problem-solving, reflected in the areas of the brain that light up on the scans. Studies show that the brain’s structure changes as people get better at different types of jobs.
The brain’s structural plasticity is reminiscent of the way different muscles grow more pronounced with different types of exercise, and achieving what some of my colleagues could with CSS, when my brain had been trained for decades on imperative logic, felt about as approachable as lifting a car over my head.

The intimidation I felt from CSS started to change when I learned about the checkbox hack, because I could relate to hiding and showing divs based on checkboxes, which was routine in my work in the back of the front-end. My designer workmate challenged me to make a game in one night using just CSS. I came up with a pure text adventure game made out of radio button hacks. Since creative and curious people are more sensitive to novel stimuli, the design experts on my team were enthralled by my primitive demo, not because it was cutting-edge gameplay but because it was something they had never seen before. My engineering background was now an asset rather than a hindrance in the unique outsider perspective I could bring to the world of CSS. I was hooked.

The hack I found to rewire my brain to become more CSS-friendly was to find analogies in CSS to the type of problem-solving I was more familiar with from imperative programming:

  - CSS custom properties are like reactive variables in Vue.
  - The :target selector in CSS is like client-side routing in a single-page application.
  - The min() and max() functions in CSS can be used to simulate some of the logical operations we take for granted in imperative programming.

So if you are still learning web development and CSS (ultimately, we are all still learning), instead of feeling imposter syndrome, consider that the very thing that makes you feel like an outsider could be what enables you to bring something unique to your usage of CSS.
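To make that last analogy concrete, here is a minimal sketch of the idea: when custom properties hold only 0 or 1, min() behaves like AND, max() like OR, and calc(1 - x) like NOT. The property names below are mine, purely for illustration.

```css
.card {
  /* "Boolean" flags: each is either 0 or 1 */
  --is-active: 1;
  --is-large: 0;

  /* AND: 1 only when both flags are 1 */
  --active-and-large: min(var(--is-active), var(--is-large));

  /* OR: 1 when at least one flag is 1 */
  --active-or-large: max(var(--is-active), var(--is-large));

  /* NOT: invert a flag */
  --not-active: calc(1 - var(--is-active));

  /* Use a result, e.g. toggle opacity between 0.5 and 1 */
  opacity: calc(0.5 + 0.5 * var(--active-or-large));
}
```

Flipping a flag from another rule (say, a style query or a hover state) then ripples through every derived value, which is the imperative-logic feel the analogy points at.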
Finding the purpose

Excited as I was when my CSS hacking ended up providing the opportunity to publish my experiments on CSS-Tricks, the first comment on the first hack I published there was a generic, defeatist “Why would you do that?” criticism. The other comments that popped up turned out to be more supportive, and I said in a previous article that I’ve made my peace with the fact that not everybody will like my articles. However, this is the second article in which I’ve brought up that critical comment from back in 2021. Hmm… Surely it wasn’t the reason I didn’t write another CSS-Tricks article for years. And it’s probably a coincidence that when I returned to CSS-Tricks last year, my first new article was a CSS hack that lends itself to accessibility, after the person who left the negative comment about my first article seemed to have a bee in their bonnet about checkbox hacks breaking accessibility, even in fun CSS games not intended for production.

Then again, limiting myself to CSS hacking that enables accessibility became a source of inspiration. We could all use a reminder to empathize at all times with users who require screen readers, even when we are doing wacky experimental stuff, because we need to embrace the limitations not just of CSS but of our audience. I suppose the reason the negative comment continues to rankle is that I agree that clarifying the relevance and purpose of a CSS trick is important. And yet, if I’m right in saying a CSS trick is more like something we discover than something we make, then it’s like finding a beautiful feather on a walk. At first, we pick it up just because we can, but if I bring you with me on the journey that led to the discovery, then you can help me decide whether the significance is that the feather we discovered makes a great quill or reveals that a rare species of bird lives in the region.
It’s a journey-versus-destination thing to share the failures that led to compromises and the limitations I came up against when pushing the boundaries of CSS. When I bring you along on the route to the curious item I found, rather than just showing you that item, then after we part ways, you might retrace the steps and try a different fork in the path we followed, which could lead you to discover your own CSS trick.

How to Discover a CSS Trick originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  25. by: Abhishek Prakash Fri, 25 Jul 2025 17:17:39 +0530

I am a strong believer in micro-learning. Sure, we have long courses on huge topics like Linux and Ansible, but we also have courses on relatively smaller topics like SSH and Kubernetes Operators. Now, we have a new micro-course on systemd.

Like most of our educational material, this too is a hands-on course, meaning you will learn by following the steps on your own system.

The systemd Playbook: Learn by Doing (Linux Handbook, Team LHB): Master systemd the practical way, one lab at a time.

This systemd course is for Pro members only, so if you don't have a Pro membership yet, perhaps it is time to upgrade. We have just launched a "lifetime membership" option: for a single payment, you get the Plus membership of the Learner tier forever. No recurring payments; you pay just once and enjoy it forever. The first 50 lifetime members get it for just $99 instead of $139.

This post is for subscribers only.
