
Everything posted by Blogger
-
Make Any File a Template Using This Hidden macOS Tool
by: Geoff Graham Mon, 10 Feb 2025 13:54:00 +0000 From MacRumors: This works for any kind of file, including HTML, CSS, JavaScript, or what have you. You can get there with CMD+I or by right-clicking and selecting “Get Info.” Make Any File a Template Using This Hidden macOS Tool originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
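As a quick aside for command-line folks: the checkbox in question is the Stationery Pad flag in the Get Info panel, and Finder exposes it to AppleScript, so you can likely script it too. A minimal sketch (the file path is a placeholder):

# Mark a file as a template (Stationery Pad) from the terminal.
# Assumes Finder's AppleScript "stationery" property; the path is illustrative.
osascript -e 'tell application "Finder" to set stationery of (POSIX file "/Users/you/template.html" as alias) to true'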
-
5 Open-source Local AI Tools for Image Generation I Found Interesting
by: Abhishek Kumar
-
ContractCrab
by: aiparabellum.com Fri, 07 Feb 2025 08:38:37 +0000 ContractCrab is an innovative AI-driven platform designed to simplify and revolutionize the way businesses handle contract reviews. By leveraging advanced artificial intelligence, this tool enables users to review contracts in just one click, significantly improving negotiation processes and saving both time and resources. Whether you are a legal professional, a business owner, or an individual needing efficient contract management, ContractCrab ensures accuracy, speed, and cost-effectiveness in handling your legal documents. Features of ContractCrab ContractCrab offers a wide range of features that cater to varied contract management needs: AI Contract Review: Automatically analyze contracts for key clauses and potential risks. Contract Summarizer: Generate concise summaries to focus on the essential points. AI Contract Storage: Securely store contracts with end-to-end encryption. Contract Extraction: Extract key information and clauses from lengthy documents. Legal Automation: Automate repetitive legal processes for enhanced efficiency. Specialized Reviews: Provides tailored reviews for employment contracts, physician agreements, and more. These features are designed to reduce manual effort, improve contract comprehension, and ensure legal accuracy. How It Works Using ContractCrab is straightforward and user-friendly: Upload the Contract: Begin by uploading your document in .pdf, .docx, or .txt format. Review the Details: The AI analyzes the content, identifies redundancies, and highlights key sections. Manage the Changes: Accept or reject AI-suggested modifications to suit your requirements. Enjoy the Result: Receive a concise, legally accurate contract summary within seconds. This seamless process ensures that contracts are reviewed quickly and effectively, saving you time and effort. Benefits of ContractCrab ContractCrab provides numerous advantages to its users: Time-Saving: Complete contract reviews in seconds instead of days. Cost-Effective: With pricing as low as $3 per hour, it is far more affordable than hiring legal professionals. Accuracy: Eliminates human errors caused by fatigue or inattention. 24/7 Availability: Accessible anytime, eliminating scheduling constraints. Enhanced Negotiations: Streamlines the process, enabling users to focus on critical aspects of agreements. Data Security: Ensures end-to-end encryption and regular data backups for maximum protection. These benefits make ContractCrab an indispensable tool for businesses and individuals alike. Pricing ContractCrab offers competitive and transparent pricing plans: Starting at $3 per hour: Ideal for quick and efficient reviews. Monthly Subscription at $30: Provides unlimited access to all features. This affordability ensures that businesses of all sizes can leverage the platform’s advanced AI capabilities without overspending. Review ContractCrab has received positive feedback from professionals and users across industries: Ellen Hernandez, Contract Manager: “The most attractive pricing on the legal technology market. Excellent value for the features provided.” William Padilla, Chief Security Officer: “Promising project ahead. Looking forward to the launch!” Jonathan Quinn, Personal Assistant: “Top-tier process automation. It’s great to pre-check any document before it moves to the next step.” These testimonials highlight ContractCrab’s potential to transform contract management with its advanced features and affordability. 
Conclusion ContractCrab stands out as a cutting-edge solution for AI-powered contract review, offering exceptional accuracy, speed, and cost-efficiency. Its user-friendly interface and robust features cater to diverse needs, making it an indispensable tool for businesses and individuals. With pricing as low as $3 per hour, ContractCrab ensures accessibility without compromising quality. Whether you are managing employment contracts, legal agreements, or construction documents, this platform simplifies the process, enhances readability, and mitigates risks effectively. Visit Website The post ContractCrab appeared first on AI Parabellum.
-
Installing Arch Linux with BTRFS and Disk Encryption
by: Sreenath
-
Container query units: cqi and cqb
by: Geoff Graham Thu, 06 Feb 2025 15:29:35 +0000 A little gem from Kevin Powell’s “HTML & CSS Tip of the Week” website, reminding us that using container queries opens up container query units for sizing things based on the size of the queried container. So, 1cqi is equivalent to 1% of the container’s inline size, and 1cqb is equal to 1% of the container’s block size. I’d be remiss not to mention the cqmin and cqmax units, which evaluate to either the container’s inline or block size, whichever is smaller or larger, respectively. So, we could say 50cqmax and that equals 50% of the container’s size, but it will look at both the container’s inline and block size, determine which is greater, and use that to calculate the final computed value. That’s a nice dash of conditional logic. It can help maintain proportions if you think the writing mode might change on you, such as moving from horizontal to vertical. Container query units: cqi and cqb originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
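To make that concrete, here’s a minimal sketch (the class names are made up for illustration): a card whose padding and type scale with the container it lives in, not the viewport:

/* Establish a queryable container; cqi units resolve against its inline size */
.card-wrapper {
  container-type: inline-size;
}

.card {
  /* 4% of the container's inline size, with a 1rem floor */
  padding: max(1rem, 4cqi);
  /* type scales with the container, clamped to a sane range */
  font-size: clamp(1rem, 3cqi, 1.5rem);
}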
-
Deleting Linux Entry from Boot Menu from Windows After Removing Linux
by: Abhishek Prakash
-
FOSS Weekly #25.06: Linux inside PDF file, Missing Windows from Grub, GTK Drops X11 and More
by: Abhishek Prakash
-
Baseline Status in a WordPress Block
by: Geoff Graham Wed, 05 Feb 2025 14:58:18 +0000 You know about Baseline, right? And you may have heard that the Chrome team made a web component for it. Here it is! Of course, we could simply drop the HTML component into the page. But I never know where we’re going to use something like this. The Almanac, obs. But I’m sure there are times where embedding it in other pages and posts makes sense. That’s exactly what WordPress blocks are good for. We can take an already reusable component and make it repeatable when working in the WordPress editor. So that’s what I did! That component you see up there is the <baseline-status> web component formatted as a WordPress block. Let’s drop another one in just for kicks. Pretty neat! I saw that Pawel Grzybek made an equivalent for Hugo. There’s an Astro equivalent, too. Because I’m fairly green with WordPress block development I thought I’d write a bit up on how it’s put together. There are still rough edges that I’d like to smooth out later, but this is a good enough point to share the basic idea. Scaffolding the project I used the @wordpress/create-block package to bootstrap and initialize the project. All that means is I cd’d into the /wp-content/plugins directory from the command line and ran the scaffolding command to plop it all in there. npx @wordpress/create-block The command prompts you through the setup process to name the project and all that. The baseline-status.php file is where the plugin is registered. And yes, it looks completely the same as it has for years, just not in a style.css file like it is for themes. The difference is that the create-block package does some lifting to register the widget so I don’t have to: <?php /** * Plugin Name: Baseline Status * Plugin URI: https://css-tricks.com * Description: Displays current Baseline availability for web platform features. * Requires at least: 6.6 * Requires PHP: 7.2 * Version: 0.1.0 * Author: geoffgraham * License: GPL-2.0-or-later * License URI: https://www.gnu.org/licenses/gpl-2.0.html * Text Domain: baseline-status * * @package CssTricks */ if ( ! defined( 'ABSPATH' ) ) { exit; // Exit if accessed directly. } function csstricks_baseline_status_block_init() { register_block_type( __DIR__ . '/build' ); } add_action( 'init', 'csstricks_baseline_status_block_init' ); ?> The real meat is in the src directory. The create-block package also did some filling of the blanks in the block.json file based on the onboarding process: { "$schema": "https://schemas.wp.org/trunk/block.json", "apiVersion": 2, "name": "css-tricks/baseline-status", "version": "0.1.0", "title": "Baseline Status", "category": "widgets", "icon": "chart-pie", "description": "Displays current Baseline availability for web platform features.", "example": {}, "supports": { "html": false }, "textdomain": "baseline-status", "editorScript": "file:./index.js", "editorStyle": "file:./index.css", "style": "file:./style-index.css", "render": "file:./render.php", "viewScript": "file:./view.js" } Going off some tutorials published right here on CSS-Tricks, I knew that WordPress blocks render twice — once on the front end and once on the back end — and there’s a file for each one in the src folder: render.php: Handles the front-end view edit.js: Handles the back-end view The front-end and back-end markup Cool.
I started with the <baseline-status> web component’s markup: <script src="https://cdn.jsdelivr.net/npm/baseline-status@1.0.8/baseline-status.min.js" type="module"></script> <baseline-status featureId="anchor-positioning"></baseline-status> I’d hate to inject that <script> every time the block pops up, so I decided to enqueue the file conditionally based on the block being displayed on the page. This is happening in the main baseline-status.php file, which I treated sorta the same way as a theme’s functions.php file. It’s just where helper functions go. // ... same code as before // Enqueue the minified script function csstricks_enqueue_block_assets() { wp_enqueue_script( 'baseline-status-widget-script', 'https://cdn.jsdelivr.net/npm/baseline-status@1.0.4/baseline-status.min.js', array(), '1.0.4', true ); } add_action( 'enqueue_block_assets', 'csstricks_enqueue_block_assets' ); // Adds the 'type="module"' attribute to the script function csstricks_add_type_attribute($tag, $handle, $src) { if ( 'baseline-status-widget-script' === $handle ) { $tag = '<script type="module" src="' . esc_url( $src ) . '"></script>'; } return $tag; } add_filter( 'script_loader_tag', 'csstricks_add_type_attribute', 10, 3 ); // Enqueues the scripts and styles for the back end function csstricks_enqueue_block_editor_assets() { // Enqueues the scripts wp_enqueue_script( 'baseline-status-widget-block', plugins_url( 'block.js', __FILE__ ), array( 'wp-blocks', 'wp-element', 'wp-editor' ), false, ); // Enqueues the styles wp_enqueue_style( 'baseline-status-widget-block-editor', plugins_url( 'style.css', __FILE__ ), array( 'wp-edit-blocks' ), false, ); } add_action( 'enqueue_block_editor_assets', 'csstricks_enqueue_block_editor_assets' ); The final result bakes the script directly into the plugin so that it adheres to the WordPress Plugin Directory guidelines. If that wasn’t the case, I’d probably keep the hosted script intact because I’m completely uninterested in maintaining it. Oh, and that csstricks_add_type_attribute() function is to help import the file as an ES module. There’s a wp_enqueue_script_module() function available that should handle that, but I couldn’t get it to do the trick. With that in hand, I can put the component’s markup into a template. The render.php file is where all the front-end goodness resides, so that’s where I dropped the markup: <baseline-status <?php echo get_block_wrapper_attributes(); ?> featureId="[FEATURE]"> </baseline-status> That get_block_wrapper_attributes() thing is recommended by the WordPress docs as a way to output all of a block’s information for debugging things, such as which features it ought to support. [FEATURE] is a placeholder that will eventually tell the component which web platform feature to render information about. We may as well work on that now. I can register attributes for the component in block.json: "attributes": { "featureID": { "type": "string" } }, Now we can update the markup in render.php to echo the featureID when it’s been established. <baseline-status <?php echo get_block_wrapper_attributes(); ?> featureId="<?php echo esc_html( $featureID ); ?>"> </baseline-status> There will be more edits to that markup a little later. But first, I need to put the markup in the edit.js file so that the component renders in the WordPress editor when adding it to the page.
<baseline-status { ...useBlockProps() } featureId={ featureID }></baseline-status> useBlockProps is the JavaScript equivalent of get_block_wrapper_attributes() and can be good for debugging on the back end. At this point, the block is fully rendered on the page when dropped in! The problems are: It’s not passing in the feature I want to display. It’s not editable. I’ll work on the latter first. That way, I can simply plug the right variable in there once everything’s been hooked up. Block settings One of the nicer aspects of WordPress DX is that we have direct access to the same controls that WordPress uses for its own blocks. We import them and extend them where needed. I started by importing the stuff in edit.js: import { InspectorControls, useBlockProps } from '@wordpress/block-editor'; import { PanelBody, TextControl } from '@wordpress/components'; import { __ } from '@wordpress/i18n'; import './editor.scss'; This gives me a few handy things: InspectorControls are good for debugging. useBlockProps are what can be debugged. PanelBody is the main wrapper for the block settings. TextControl is the field I want to pass into the markup where [FEATURE] currently is. editor.scss provides styles for the controls. (The __ function, from the i18n package, handles the translatable strings.) Before I get to the controls, there’s an Edit function needed to use as a wrapper for all the work: export default function Edit( { attributes, setAttributes } ) { // Controls } First are the InspectorControls and the PanelBody: export default function Edit( { attributes, setAttributes } ) { // React components need a parent element return ( <> <InspectorControls> <PanelBody title={ __( 'Settings', 'baseline-status' ) }> { /* Controls */ } </PanelBody> </InspectorControls> </> ); } Then it’s time for the actual text input control. I really had to lean on this introductory tutorial on block development for the following code, notably this section. export default function Edit( { attributes, setAttributes } ) { return ( <> <InspectorControls> <PanelBody title={ __( 'Settings', 'baseline-status' ) }> <TextControl label={ __( 'Feature', // Input label 'baseline-status' ) } value={ featureID || '' } onChange={ ( value ) => setAttributes( { featureID: value } ) } /> </PanelBody> </InspectorControls> </> ); } Tie it all together At this point, I have: The front-end view The back-end view Block settings with a text input All the logic for handling state Oh yeah! Can’t forget to define the featureID variable because that’s what populates in the component’s markup. Back in edit.js: const { featureID } = attributes; In short: The feature’s ID is what constitutes the block’s attributes. Now I need to register that attribute so the block recognizes it. Back in block.json in a new section: "attributes": { "featureID": { "type": "string" } }, Pretty straightforward, I think. Just a single text field that’s a string. It’s at this time that I can finally wire it up to the front-end markup in render.php: <baseline-status <?php echo get_block_wrapper_attributes(); ?> featureId="<?php echo esc_html( $featureID ); ?>"> </baseline-status> Styling the component I struggled with this more than I care to admit. I’ve dabbled with styling the Shadow DOM but only academically, so to speak. This is the first time I’ve attempted to style a web component with Shadow DOM parts on something being used in production. If you’re new to Shadow DOM, the basic idea is that it prevents styles and scripts from “leaking” in or out of the component. This is a big selling point of web components because it’s so darn easy to drop them into any project and have them “just” work.
But how do you style a third-party web component? It depends on how the developer sets things up because there are ways to allow styles to “pierce” through the Shadow DOM. Ollie Williams wrote “Styling in the Shadow DOM With CSS Shadow Parts” for us a while back and it was super helpful in pointing me in the right direction. Chris has one, too. A few other articles I used: “Options for styling web components” (Nolan Lawson, super well done!) “Styling web components” (Chris Ferdinandi) “Styling” (webcomponents.guide) First off, I knew I could select the <baseline-status> element directly without any classes, IDs, or other attributes: baseline-status { /* Styles! */ } I peeked at the script’s source code to see what I was working with. I had a few light styles I could use right away on the type selector: baseline-status { background: #000; border: solid 5px #f8a100; border-radius: 8px; color: #fff; display: block; margin-block-end: 1.5em; padding: .5em; } I noticed a CSS color variable in the source code that I could use in place of hard-coded values, so I redefined them and set them where needed: baseline-status { --color-text: #fff; --color-outline: var(--orange); border: solid 5px var(--color-outline); border-radius: 8px; color: var(--color-text); display: block; margin-block-end: var(--gap); padding: calc(var(--gap) / 4); } Now for a tricky part. The component’s markup looks close to this in the DOM when fully rendered: <baseline-status class="wp-block-css-tricks-baseline-status" featureid="anchor-positioning"> <h1>Anchor positioning</h1> <details> <summary aria-label="Baseline: Limited availability. Supported in Chrome: yes. Supported in Edge: yes. Supported in Firefox: no. Supported in Safari: no."> <baseline-icon aria-hidden="true" support="limited"></baseline-icon> <div class="baseline-status-title" aria-hidden="true"> <div>Limited availability</div> <div class="baseline-status-browsers"> <!-- Browser icons --> </div> </div> </summary> <p>This feature is not Baseline because it does not work in some of the most widely-used browsers.</p> <p><a href="https://github.com/web-platform-dx/web-features/blob/main/features/anchor-positioning.yml">Learn more</a></p> </details> </baseline-status> I wanted to play with the idea of hiding the <h1> element in some contexts but thought twice about it because not displaying the title only really works for Almanac content when you’re on the page for the same feature as what’s rendered in the component. Any other context and the heading is a “need” for providing context as far as what feature we’re looking at. Maybe that can be a future enhancement where the heading can be toggled on and off. Voilà Get the plugin! This is freely available in the WordPress Plugin Directory as of today! This is my very first plugin I’ve submitted to WordPress on my own behalf, so this is really exciting for me! Get the plugin Future improvements This is far from fully baked but definitely gets the job done for now. In the future it’d be nice if this thing could do a few more things: Live update: The widget does not update on the back end until the page refreshes. I’d love to see the final rendering before hitting Publish on something. I got it where typing into the text input is instantly reflected on the back end. It’s just that the component doesn’t re-render to show the update. Variations: As in “large” and “small”.
Heading: Toggle to hide or show, depending on where the block is used. Baseline Status in a WordPress Block originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
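On that live-update item: one possibility worth trying — an assumption on my part, not something the post confirms — is keying the element to featureID in edit.js so React remounts the custom element whenever the attribute changes, forcing the web component to re-render:

// edit.js — hypothetical tweak: changing `key` makes React replace the
// element entirely, so the web component re-runs with the new featureId
<baseline-status
  key={ featureID }
  { ...useBlockProps() }
  featureId={ featureID }
></baseline-status>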
-
Missing Windows from Grub After Dual Boot? Here's What You Can Do
by: Abhishek Prakash
-
Game Development on Linux
by: Janus Atienza Tue, 04 Feb 2025 15:52:14 +0000 If you’ve ever thought about making games but assumed Linux wasn’t the right platform for it, think again! While Windows and macOS might dominate the game development scene, Linux has quietly built up an impressive toolkit for developers. Whether you’re an indie creator looking for open-source flexibility or a studio considering Linux support, the ecosystem has come a long way. From powerful game engines to robust development tools, Linux offers everything you need to build and test games. In this article, we’ll break down why Linux is worth considering, the best tools available, and how you can get started. Why Choose Linux for Game Development? If you’re wondering why anyone would develop games on Linux instead of Windows or macOS, the answer is simple: freedom, flexibility, and performance. First off, Linux is open-source, which means you aren’t locked into a specific ecosystem. You can customize your entire development environment, from the desktop interface to the compiler settings. No forced updates, no bloated background processes eating up resources — just an efficient workspace built exactly how you like it. Then there’s the stability and performance factor. Unlike Windows, which can sometimes feel sluggish with unnecessary background tasks, Linux runs lean. This is especially useful when you’re working with heavy game engines or compiling large projects. It’s why so many servers and supercomputers use Linux — it just works. Another big plus? Cost savings. Everything you need — IDEs, compilers, game engines, and creative tools — can be found for free. Instead of shelling out for expensive software licenses, you can reinvest that money into your project. And let’s not forget about growing industry support. Unity, Unreal Engine, and Godot all support Linux, and with platforms like Steam Deck running Linux-based SteamOS, game development for Linux is more relevant than ever. Sure, it’s not as mainstream as Windows, but if you’re looking for a powerful, flexible, and budget-friendly development setup, Linux is definitely worth considering. Best Game Engines for Linux If you’re developing games on Linux, you’ll be happy to know that several powerful game engines fully support it. Here are some of the best options: 1. Unity – The Industry Standard Unity is one of the most popular game engines out there, and yes, it supports Linux. The Unity Editor runs on Linux, though it’s still considered in “preview” mode. However, many game development companies like RetroStyle Games successfully use it for 2D and 3D game development. Plus, you can build games for multiple platforms, including Windows, macOS, mobile, and even consoles — all from Linux. 2. Unreal Engine – AAA-Quality Development If you’re aiming for high-end graphics, Unreal Engine is a great choice. It officially supports Linux, and while the Linux version of the editor might not be as polished as the Windows one, it still gets the job done. Unreal’s powerful rendering and blueprint system make it a top pick for ambitious projects. 3. Godot – The Open-Source Powerhouse If you love open-source software, Godot is a dream come true. It’s completely free, lightweight, and optimized for Linux. The engine supports both 2D and 3D game development and has its own scripting language (GDScript) that’s easy to learn. Plus, since Godot itself is open-source, you can tweak the engine however you like. 4. Other Notable Mentions Defold – A lightweight engine with strong 2D capabilities.
Love2D – Perfect for simple 2D games using Lua scripting. Stride – A promising C#-based open-source engine. Essential Tools for Linux Game Development Once you’ve picked your game engine, you’ll need the right tools to bring your game to life. Luckily, Linux has everything you need, from coding and design to audio and version control. 1. Code Editors & IDEs If you’re writing code, you need a solid editor. VS Code is a favorite among game developers, with great support for C#, Python, and other languages. If you prefer something more powerful, JetBrains Rider is a top-tier choice for Unity developers. For those who like minimalism, Vim or Neovim can be customized to perfection. 2. Graphics & Animation Tools Linux has some fantastic tools for art and animation. Blender is the go-to for 3D modeling and animation, while Krita and GIMP are excellent for 2D art and textures. If you’re working with pixel art, Aseprite (open-source version) is a fantastic option. 3. Audio Tools For sound effects and music, LMMS (like FL Studio but free) and Ardour (a powerful DAW) are solid choices. If you just need basic sound editing, Audacity is a lightweight but effective tool. 4. Version Control You don’t want to lose hours of work due to a crash. That’s where Git comes in. You can use GitHub, GitLab, or Bitbucket to store your project, collaborate with teammates, and roll back to previous versions when needed. With these tools, you’ll have everything you need to code, design, animate, and refine your game — all within Linux. And the best part? Most of them are free and open-source! Setting Up a Linux Development Environment Getting your Linux system ready for game development isn’t as complicated as it sounds. In fact, once you’ve set it up, you’ll have a lightweight, stable, and efficient workspace that’s perfect for coding, designing, and testing your game. First step: Pick the Right Linux Distro: Not all Linux distributions (distros) are built the same, so choosing the right one can save you a lot of headaches. If you want ease of use, go with Ubuntu or Pop!_OS — both have great driver support and a massive community for troubleshooting. If you prefer cutting-edge software, Manjaro or Fedora are solid picks. Second step: Install Essential Libraries & Dependencies: Depending on your game engine, you may need to install extra libraries. For example, if you’re using Unity, you’ll want Mono and .NET SDK. Unreal Engine requires Clang and some development packages. Most of these can be installed easily via the package manager: sudo apt install build-essential git cmake For Arch-based distros, you’d use: sudo pacman -S base-devel git cmake Third step: Set Up Your Game Engine: Most popular engines work on Linux, but the setup varies: Unity: Download the Unity Hub (Linux version) and install the editor. Unreal Engine: Requires compiling from source via GitHub. Godot: Just download the binary, and you’re ready to go. Fourth step: Configure Development Tools: Install VS Code or JetBrains Rider for coding. Get Blender, Krita, or GIMP for custom 3D game art solutions. Set up Git for version control. Building & Testing Games on Linux Once you’ve got your game up and running in the engine, it’s time to build and test it. The good news? Linux makes this process smooth — especially if you’re targeting multiple platforms.
1. Compiling Your Game Most game engines handle the build process automatically, but if you're using a custom engine or working with compiled languages like C++, you’ll need a good build system. CMake and Make are commonly used for managing builds, while GCC and Clang are solid compilers for performance-heavy games. To compile, you’d typically run cmake . to configure, then make to build, and finally ./yourgame to launch the result. If you're working with Unity or Unreal, the built-in export tools will package your game for Linux, Windows, and more. 2. Performance Optimization Linux is great for debugging because it doesn’t have as many background processes eating up resources. To monitor performance, you can use: htop – For checking CPU and memory usage. glxinfo | grep "OpenGL version" – To verify your GPU drivers. Vulkan tools – If your game uses Vulkan for rendering. 3. Testing Across Different Hardware & Distros Not all Linux systems are the same, so it’s a good idea to test your game on multiple distros. Tools like Flatpak and AppImage help create portable builds that work across different Linux versions. If you're planning to distribute on Steam, its Proton compatibility layer can help you test how well your game runs. Challenges & Limitations While Linux is a great platform for game development, it isn’t without its challenges. If you’re coming from Windows or macOS, you might run into a few roadblocks — but nothing that can’t be worked around. Some industry-standard tools, like Adobe Photoshop, Autodesk Maya, and certain middleware, don’t have native Linux versions. Luckily, there are solid alternatives like GIMP, Krita, and Blender, but if you absolutely need a Windows-only tool, Wine or a virtual machine might be your best bet. While Linux has come a long way with hardware support, GPU drivers can still be tricky. NVIDIA’s proprietary drivers work well but sometimes require extra setup, while AMD’s open-source drivers are generally more stable but may lag in some optimizations. If you’re using Vulkan, make sure your drivers are up to date for the best performance. Linux gaming has grown, especially with Steam Deck and Proton, but it’s still a niche market. If you’re planning to sell a game, Windows and consoles should be your priority — Linux can be a nice bonus, but not the main target unless you’re making something for the open-source community. Despite these challenges, many developers like RetroStyle Games successfully create games on Linux. The key is finding the right workflow and tools that work for you. And with the growing support from game engines and platforms, Linux game development is only getting better! Conclusion So, is Linux a good choice for game development? Absolutely — but with some caveats. If you value customization, performance, and open-source tools, Linux gives you everything you need to build amazing games. Plus, with engines like Unity, Unreal, and Godot supporting Linux, developing on this platform is more viable than ever. That said, it isn’t all smooth sailing. You might have to tweak drivers, find alternatives to proprietary software, and troubleshoot compatibility issues. But if you’re willing to put in the effort, Linux rewards you with a fast, stable, and distraction-free development environment. At the end of the day, whether Linux is right for you depends on your workflow and project needs. If you’re curious, why not set up a test environment and give it a shot? You might be surprised at how much you like it! The post Game Development on Linux appeared first on Unixmen.
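To make the cmake/make step above concrete, here’s a minimal sketch of a CMakeLists.txt for a small C++ game (the project name and source paths are placeholders):

# CMakeLists.txt — minimal build for a single-executable game
cmake_minimum_required(VERSION 3.16)
project(yourgame CXX)

# Games are performance-sensitive; default to an optimized build
if(NOT CMAKE_BUILD_TYPE)
  set(CMAKE_BUILD_TYPE Release)
endif()
set(CMAKE_CXX_STANDARD 17)

# Build the game executable from its sources (filenames are illustrative)
add_executable(yourgame src/main.cpp)

With that in place, running cmake . followed by make produces the ./yourgame binary described above.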
-
ArmSoM CM5: Powerful Replacement for Raspberry Pi CM4
by: Abhishek Kumar
-
Chris’ Corner: Offlinin’ Aint Easy
by: Chris Coyier Mon, 03 Feb 2025 17:27:05 +0000 I kinda like the idea of the “minimal” service worker. Service Workers can be pretty damn complicated and the power of them honestly makes me a little nervous. They are middlemen between the browser and the network and I can imagine really dinking that up, myself. Not to dissuade you from using them, as they can do useful things no other technology can do. That’s why I like the “minimal” part. I want to understand what it’s doing extremely clearly! The less code the better. Tantek posted about that recently, with a minimal idea. That seems clearly useful. The bit about linking to an archive of the page though seems a smidge off to me. If the reason a user can’t see the page is because they are offline, a page that sends them to the Internet Archive isn’t going to work either. But I like the bit about caching and at least trying to do something. Jeremy Keith did some thinking about this back in 2018 as well. The implementation is actually just a few lines of code. A variation of it handles Tantek’s idea as well, implementing a custom offline page that could do the thing where it links off to an archive elsewhere. I’ll leave you with a couple more links. Have you heard the term LoFi? I’m not the biggest fan of the shortening of it because “Lo-fi” is a pretty established musical term, not to mention “low fidelity” is useful in all sorts of contexts. But recently in web tech it refers to “Local First”. I dig the idea honestly and do see it as a place for technology (and companies that make technology) to step in and really make this style of working easy. Plenty of stuff already works this way. I think of the Notes app on my phone. Those notes are always available. It doesn’t (seem to) care if I’m online or offline. If I’m online, they’ll sync up with the cloud so other devices and backups will have the latest, but if not, so be it. It better as heck work that way! And I’m glad it does, but lots of stuff on the web does not (CodePen doesn’t). But I’d like to build stuff that works that way and have it not be some huge mountain to climb. That “eh, we’ll just sync later, whenever we have network access” bit is super non-trivial, and that’s part of the issue. Technology could make easy/dumb choices like “last write wins”, but that tends to be dangerous data-loss territory that users don’t put up with. Instead, data needs to be intelligently merged, and that isn’t easy. Dropbox is a multi-billion dollar company that deals with this and they admittedly don’t always have it perfect. One of the major solutions is the concept of CRDTs, which are an impressive idea to say the least, but are complex enough that most of us will gently back away. So I’ll simply leave you with A Gentle Introduction to CRDTs.
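For flavor, the sort of “minimal” service worker both posts describe might look something like this — a sketch of the network-first, cached-fallback idea, not Tantek’s or Jeremy’s exact code (the /offline.html path is a placeholder):

// sw.js — cache a small fallback page at install time
const CACHE = 'offline-v1';

self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(CACHE).then((cache) => cache.add('/offline.html'))
  );
});

// Try the network first; if it fails (likely offline), serve the fallback
self.addEventListener('fetch', (event) => {
  if (event.request.mode === 'navigate') {
    event.respondWith(
      fetch(event.request).catch(() => caches.match('/offline.html'))
    );
  }
});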
-
Compiling CSS With Vite and Lightning CSS
by: Ryan Trimble Mon, 03 Feb 2025 15:23:37 +0000 If you follow CSS feature development as closely as we do here at CSS-Tricks, you may be like me: eager to use many of these amazing tools, but finding browser support sometimes lagging behind what might be considered “modern” CSS (whatever that means). Even if browser vendors all have a certain feature released, users might not have the latest versions! We can certainly plan for this in a number of ways: feature detection with @supports progressively enhanced designs polyfills For even extra help, we turn to build tools. Chances are, you’re already using some sort of build tool in your projects today. CSS developers are most likely familiar with CSS pre-processors (such as Sass or Less), but if you don’t know, these are tools capable of compiling many CSS files into one stylesheet. CSS pre-processors help make organizing CSS a lot easier, as you can move parts of CSS into related folders and import things as needed. Pre-processors do not just provide organizational superpowers, though. Sass gave us a crazy list of features to work with, including: extends functions loops mixins nesting variables …more, probably! For a while, this big feature set provided a means of filling gaps missing from CSS, making Sass (or whatever preprocessor you fancy) feel like a necessity when starting a new project. CSS has evolved a lot since the release of Sass — we have so many of those features in CSS today — so it doesn’t quite feel that way anymore, especially now that we have native CSS nesting and custom properties. Along with CSS pre-processors, there’s also the concept of post-processing. This type of tool usually helps transform compiled CSS in different ways, like auto-prefixing properties for different browser vendors, code minification, and more. PostCSS is the big one here, giving you tons of ways to manipulate and optimize your code, another step in the build pipeline. In many implementations I’ve seen, the build pipeline typically runs roughly like this: Generate static assets Build application files Bundle for deployment CSS is usually handled in that first part, which includes running CSS pre- and post-processors (though post-processing might also happen after Step 2). As mentioned, the continued evolution of CSS makes it less necessary for a tool such as Sass, so we might have an opportunity to save some time. Vite for CSS Awarded “Most Adopted Technology” and “Most Loved Library” from the State of JavaScript 2024 survey, Vite certainly seems to be one of the more popular build tools available. Vite is mainly used to build reactive JavaScript front-end frameworks, such as Angular, React, Svelte, and Vue (made by the same developer, of course). As the name implies, Vite is crazy fast and can be as simple or complex as you need it, and has become one of my favorite tools to work with. Vite is mostly thought of as a JavaScript tool for JavaScript projects, but you can use it without writing any JavaScript at all. Vite works with Sass, though you still need to install Sass as a dependency to include it in the build pipeline. On the other hand, Vite also automatically supports compiling CSS with no extra steps. We can organize our CSS code how we see fit, with no or very minimal configuration necessary. Let’s check that out. We will be using Node and npm to install Node packages, like Vite, as well as commands to run and build the project. If you do not have Node or npm installed, please check out the download page on their website.
Navigate a terminal to a safe place to create a new project, then run: npm create vite@latest The command line interface will ask a few questions; you can keep it as simple as possible by choosing Vanilla and JavaScript, which will provide you with a starter template including some no-frameworks-attached HTML, CSS, and JavaScript files to help get you started. Before running other commands, open the folder in your IDE (integrated development environment, such as VSCode) of choice so that we can inspect the project files and folders. If you would like to follow along with me, delete the following files that are unnecessary for demonstration: assets/ public/ src/ .gitignore We should only have the following files left in our project folder: index.html package.json Let’s also replace the contents of index.html with an empty HTML template: <!doctype html> <html lang="en"> <head> <meta charset="UTF-8" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <title>CSS Only Vite Project</title> </head> <body> <!-- empty for now --> </body> </html> One last piece to set up is Vite’s dependencies, so let’s run the npm installation command: npm install A short sequence will occur in the terminal. Then we’ll see a new folder called node_modules/ and a package-lock.json file added in our file viewer. node_modules is used to house all package files installed through the Node package manager, and allows us to import and use installed packages throughout our applications. package-lock.json is a file usually used to make sure a development team is all using the same versions of packages and dependencies. We most likely won’t need to touch these things, but they are necessary for Node and Vite to process our code during the build. Inside the project’s root folder, we can create a styles/ folder to contain the CSS we will write. Let’s create one file to begin with, main.css, which we can use to test out Vite. ├── public/ ├── styles/ │ └── main.css └── index.html In our index.html file, inside the <head> section, we can include a <link> tag pointing to the CSS file: <head> <meta charset="UTF-8" /> <link rel="icon" type="image/svg+xml" href="/vite.svg" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <title>CSS Only Vite Project</title> <!-- Main CSS --> <link rel="stylesheet" href="styles/main.css"> </head> Let’s add a bit of CSS to main.css: body { background: green; } It’s not much, but it’s all we’ll need at the moment! In our terminal, we can now run the Vite build command using npm: npm run build With everything linked up properly, Vite will build things based on what is available within the index.html file, including our linked CSS files. The build will be very fast, and you’ll be returned to your terminal prompt. Vite will provide a brief report, showcasing the file sizes of the compiled project. The newly generated dist/ folder is Vite’s default output directory, which we can open and see our processed files. Check out assets/index.css (the filename will include a unique hash for cache busting) and you’ll see the code we wrote, minified. Now that we know how to make Vite aware of our CSS, we will probably want to start writing more CSS for it to compile. As quick as Vite is with our code, constantly re-running the build command would still get very tedious. Luckily, Vite provides its own development server, which includes a live environment with hot module reloading, making changes appear instantly in the browser.
We can start the Vite development server by running the following terminal command: npm run dev Vite uses the default network port 5173 for the development server. Opening the http://localhost:5173/ address in your browser will display a blank screen with a green background. Add any HTML to index.html or CSS to main.css and Vite will reload the page to display the changes. To stop the development server, use the keyboard shortcut CTRL+C or close the terminal to kill the process. At this point, you pretty much know all you need to know about how to compile CSS files with Vite. Any CSS file you link up will be included in the built file. Organizing CSS into Cascade Layers One of the items on my 2025 CSS Wishlist is the ability to apply a cascade layer to a link tag. To me, this might be helpful to organize CSS in meaningful ways, as well as provide fine control over the cascade, with the benefits cascade layers provide. Unfortunately, this is a rather difficult ask when considering the way browsers paint styles in the viewport. This type of functionality is being discussed between the CSS Working Group and TAG, but it’s unclear if it’ll move forward. With Vite as our build tool, we can replicate the concept as a way to organize our built CSS. Inside the main.css file, let’s add the @layer at-rule to set the cascade order of our layers. I’ll use a couple of layers here for this demo, but feel free to customize this setup to your needs. /* styles/main.css */ @layer reset, layouts; This is all we’ll need inside our main.css. Let’s create another file for our reset. I’m a fan of my friend Mayank’s modern CSS reset, which is available as a Node package. We can install the reset by running the following terminal command: npm install @acab/reset.css Now, we can import Mayank’s reset into our newly created reset.css file, as a cascade layer: /* styles/reset.css */ @import '@acab/reset.css' layer(reset); If there are any other reset layer stylings we want to include, we can open up another @layer reset block inside this file as well. /* styles/reset.css */ @import '@acab/reset.css' layer(reset); @layer reset { /* custom reset styles */ } This @import statement is used to pull packages from the node_modules folder. This folder is not generally available in the built, public version of a website or application, so referencing this might cause problems if not handled properly. Now that we have two files (main.css and reset.css), let’s link them up in our index.html file. Inside the <head> tag, let’s add them after <title>: <head> <meta charset="UTF-8" /> <link rel="icon" type="image/svg+xml" href="/vite.svg" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <title>CSS Only Vite Project</title> <link rel="stylesheet" href="styles/main.css"> <link rel="stylesheet" href="styles/reset.css"> </head> The idea here is we can add each CSS file, in the order we need them parsed. In this case, I’m planning to pull in each file named after the cascade layers setup in the main.css file. This may not work for every setup, but it is a helpful way to keep in mind the precedence of how cascade layers affect computed styles when rendered in a browser, as well as grouping similarly relevant files. Since we’re in the index.html file, we’ll add a third CSS <link> for styles/layouts.css.
<head> <meta charset="UTF-8" /> <link rel="icon" type="image/svg+xml" href="/vite.svg" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <title>CSS Only Vite Project</title> <link rel="stylesheet" href="styles/main.css"> <link rel="stylesheet" href="styles/reset.css"> <link rel="stylesheet" href="styles/layouts.css"> </head> Create the styles/layouts.css file with the new @layer layouts declaration block, where we can add layout-specific stylings. /* styles/layouts.css */ @layer layouts { /* layouts styles */ } For some quick, easy, and awesome CSS snippets, I tend to refer to Stephanie Eckles’ SmolCSS project. Let’s grab the “Smol intrinsic container” code and include it within the layouts cascade layer: /* styles/layouts.css */ @layer layouts { .smol-container { width: min(100% - 3rem, var(--container-max, 60ch)); margin-inline: auto; } } This powerful little two-line container uses the CSS min() function to provide a responsive width, with margin-inline: auto; set to horizontally center itself and contain its child elements. We can also dynamically adjust the width using the --container-max custom property. Now if we re-run the build command npm run build and check the dist/ folder, our compiled CSS file should contain: Our cascade layer declarations from main.css Mayank’s CSS reset fully imported from reset.css The .smol-container class added from layouts.css As you can see, we can get quite far with Vite as our build tool without writing any JavaScript. However, if we choose to, we can extend our build’s capabilities even further by writing just a little bit of JavaScript. Post-processing with Lightning CSS Lightning CSS is a CSS parser and post-processing tool that has a lot of nice features baked into it to help with cross-compatibility among browsers and browser versions. Lightning CSS can transform a lot of modern CSS into backward-compatible styles for you. We can install Lightning CSS in our project with npm: npm install --save-dev lightningcss The --save-dev flag means the package will be installed as a development dependency, as it won’t be included with our built project. We can include it within our Vite build process, but first, we will need to write a tiny bit of JavaScript: a configuration file for Vite. Create a new file called vite.config.mjs and add the following code inside: // vite.config.mjs export default { css: { transformer: 'lightningcss' }, build: { cssMinify: 'lightningcss' } }; Vite will now use Lightning CSS to transform and minify CSS files. Now, let’s give it a test run using an oklch color. Inside main.css let’s add the following code: /* main.css */ body { background-color: oklch(51.98% 0.1768 142.5); } Then re-running the Vite build command, we can see the background-color property added in the compiled CSS: /* dist/index.css */ body { background-color: green; background-color: color(display-p3 0.216141 0.494224 0.131781); background-color: lab(46.2829% -47.5413 48.5542); } Lightning CSS converts the color, providing fallbacks for browsers that might not support newer color types. Following the Lightning CSS documentation for using it with Vite, we can also specify browser versions to target by installing the browserslist package. Browserslist will give us a way to specify browsers by matching certain conditions (try it out online!) npm install -D browserslist Inside our vite.config.mjs file, we can configure Lightning CSS further.
Let’s import the browserslist package into the Vite configuration, as well as a module from the Lightning CSS package to help us use browserslist in our config: // vite.config.mjs import browserslist from 'browserslist'; import { browserslistToTargets } from 'lightningcss'; We can add configuration settings for lightningcss, containing the browser targets based on specified browser versions, to Vite’s css configuration: // vite.config.mjs import browserslist from 'browserslist'; import { browserslistToTargets } from 'lightningcss'; export default { css: { transformer: 'lightningcss', lightningcss: { targets: browserslistToTargets(browserslist('>= 0.25%')), } }, build: { cssMinify: 'lightningcss' } }; There are lots of ways to extend Lightning CSS with Vite, such as enabling specific features, excluding features we won’t need, or writing our own custom transforms. // vite.config.mjs import browserslist from 'browserslist'; import { browserslistToTargets, Features } from 'lightningcss'; export default { css: { transformer: 'lightningcss', lightningcss: { targets: browserslistToTargets(browserslist('>= 0.25%')), // Including `light-dark()` and `colors()` functions include: Features.LightDark | Features.Colors, } }, build: { cssMinify: 'lightningcss' } }; For a full list of the Lightning CSS features, check out their documentation on feature flags. Is any of this necessary? Reading through all this, you may be asking yourself if all of this is really necessary. The answer: absolutely not! But I think you can see the benefits of having access to partialized files that we can compile into unified stylesheets. I doubt I’d go to these lengths for smaller projects. However, if building something with more complexity, such as a design system, I might reach for these tools for organizing code, cross-browser compatibility, and thoroughly optimizing compiled CSS. Compiling CSS With Vite and Lightning CSS originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
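Incidentally, the same transform can be run outside Vite through Lightning CSS’s Node API, which is handy for quick experiments. A minimal sketch (the inline CSS string and filename are just examples):

// transform-demo.mjs — run with: node transform-demo.mjs
import browserslist from 'browserslist';
import { transform, browserslistToTargets } from 'lightningcss';

const { code } = transform({
  filename: 'main.css',
  code: Buffer.from('body { background-color: oklch(51.98% 0.1768 142.5); }'),
  minify: true,
  targets: browserslistToTargets(browserslist('>= 0.25%')),
});

console.log(code.toString()); // compiled CSS with color fallbacks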
-
Using Emacs as Terminal Multiplexer on Windows
-
Chrome 133 Goodies
by: Geoff Graham Fri, 31 Jan 2025 15:27:50 +0000 I often wonder what it’s like working for the Chrome team. You must get issued some sort of government-level security clearance for the latest browser builds that grants you permission to bash on them ahead of everyone else and come up with these rad demos showing off the latest features. No, I’m not jealous, why are you asking? Totally unrelated, did you see the release notes for Chrome 133? It’s currently in beta, but the Chrome team has been publishing a slew of new articles with pretty incredible demos that are tough to ignore. I figured I’d round those up in one place. attr() for the masses! We’ve been able to use HTML attributes in CSS for some time now, but it’s been relegated to the content property and only parsed strings. <h1 data-color="orange">Some text</h1> h1::before { content: ' (Color: ' attr(data-color) ') '; } Bramus demonstrates how we can now use it on any CSS property, including custom properties, in Chrome 133. So, for example, we can take the attribute’s value and put it to use on the element’s color property: h1 { color: attr(data-color type(<color>), #fff); } This is a trite example, of course. But it helps illustrate that there are three moving pieces here: the attribute (data-color) the type (type(<color>)) the fallback value (#fff) We make up the attribute. It’s nice to have a wildcard we can insert into the markup and hook into for styling. The type() is a new deal that helps CSS know what sort of value it’s working with. If we had been working with a numeric value instead, we could ditch that in favor of something less verbose. For example, let’s say we’re using an attribute for the element’s font size: <div data-size="20">Some text</div> Now we can hook into the data-size attribute and use the assigned value to set the element’s font-size property, based in px units: div { font-size: attr(data-size px, 16); } The fallback value is optional and might not be necessary depending on your use case. Scroll states in container queries! This is a mind-blowing one. If you’ve ever wanted a way to style a sticky element when it’s in a “stuck” state, then you already know how cool it is to have something like this. Adam Argyle takes the classic pattern of an alphabetical list and applies styles to the letter heading when it sticks to the top of the viewport. The same is true of elements with scroll snapping and elements that are scrolling containers. In other words, we can style elements when they are “stuck”, when they are “snapped”, and when they are “scrollable”. (There’s a quick little example in the original post that you’ll want to open in a Chromium browser.) The general idea (and that’s all I know for now) is that we register a container… you know, a container that we can query. We give that container a container-type that is set to the type of scrolling we’re working with. In this case, we’re working with sticky positioning where the element “sticks” to the top of the page. .sticky-nav { container-type: scroll-state; } A container can’t query itself, so that basically has to be a wrapper around the element we want to stick. Menus are a little funny because we have the <nav> element and usually stuff it with an unordered list of links. So, our <nav> can be the container we query since we’re effectively sticking an unordered list to the top of the page.
```html
<nav class="sticky-nav">
  <ul>
    <li><a href="#">Home</a></li>
    <li><a href="#">About</a></li>
    <li><a href="#">Blog</a></li>
  </ul>
</nav>
```

We can put the sticky logic directly on the <nav> since it’s technically holding what gets stuck:

```css
.sticky-nav {
  container-type: scroll-state; /* set a scroll container query */
  position: sticky;             /* set sticky positioning */
  top: 0;                       /* stick to the top of the page */
}
```

I suppose we could use the container shorthand if we were working with multiple containers and needed to distinguish one from another with a container-name. Either way, now that we’ve defined a container, we can query it using @container! In this case, we declare the type of container we’re querying:

```css
@container scroll-state() {
}
```

And we tell it the state we’re looking for:

```css
@container scroll-state(stuck: top) {
}
```

If we were working with a sticky footer instead of a menu, then we could say stuck: bottom instead. But the kicker is that once the <nav> element sticks to the top, we get to apply styles to it in the @container block, like so:

```css
.sticky-nav {
  border-radius: 12px;
  container-type: scroll-state;
  position: sticky;
  top: 0;

  /* When the nav is in a "stuck" state */
  @container scroll-state(stuck: top) {
    border-radius: 0;
    box-shadow: 0 3px 10px hsl(0 0 0 / .25);
    width: 100%;
  }
}
```

It seems to work when nesting other selectors in there. So, for example, we can change the links in the menu when the navigation is in its stuck state:

```css
.sticky-nav {
  /* Same as before */
  a {
    color: #000;
    font-size: 1rem;
  }

  /* When the nav is in a "stuck" state */
  @container scroll-state(stuck: top) {
    /* Same as before */
    a {
      color: orangered;
      font-size: 1.5rem;
    }
  }
}
```

So, yeah. As I was saying, it must be pretty cool to be on the Chrome developer team and get ahead of stuff like this as it’s released. Big ol’ thanks to Bramus and Adam for consistently cluing us in on what’s new and doing the great work it takes to come up with such amazing demos to show things off.

Chrome 133 Goodies originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
-
Keeping the page interactive while a View Transition is running
by: Geoff Graham Fri, 31 Jan 2025 14:11:00 +0000

```
::view-transition   /* 👈 Captures all the clicks! */
└─ ::view-transition-group(root)
   └─ ::view-transition-image-pair(root)
      ├─ ::view-transition-old(root)
      └─ ::view-transition-new(root)
```

The trick? It’s that sneaky little pointer-events property! Slapping it directly on ::view-transition allows us to click “under” the pseudo-element, meaning the full page is interactive even while the view transition is running.

```css
::view-transition {
  pointer-events: none;
}
```

I always, always, always forget about pointer-events, so thanks to Bramus for posting this little snippet. I also appreciate the additional note about removing the :root element from participating in the view transition:

```css
:root {
  view-transition-name: none;
}
```

He also quotes the spec, which explains why the snapshots do not respond to hit-testing.

Keeping the page interactive while a View Transition is running originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
-
9 Cloudflare Tunnels Alternatives for Self-Hosters
by: Abhishek Kumar Fri, 31 Jan 2025 17:03:02 +0530

I’ve been using Cloudflare Tunnel for over a year, and while it’s great for hosting static HTML content securely, it has its limitations. For instance, if you’re running something like Jellyfin, you might run into issues with bandwidth limits, which can lead to account bans due to their terms of service. Cloudflare Tunnel is designed with lightweight use cases in mind, but what if you need something more robust and self-hosted? Let me introduce you to some fantastic open-source alternatives that can give you the freedom to host your services without restrictions.

1. ngrok (OSS Edition)

ngrok is a globally distributed reverse proxy designed to secure, protect, and accelerate your applications and network services, regardless of where they are hosted. Acting as the front door to your applications, ngrok integrates a reverse proxy, firewall, API gateway, and global load balancing into one seamless solution. Although the original open-source version of ngrok (v1) is no longer maintained, the platform continues to contribute to the open-source ecosystem with tools like Kubernetes operators and SDKs for popular programming languages such as Python, JavaScript, Go, Rust, and Java.

Key features:
- Securely connect APIs and databases across networks without complex configurations.
- Expose local applications to the internet for demos and testing without deployment.
- Simplify development by inspecting and replaying HTTP callback requests.
- Implement advanced traffic policies like rate limiting and authentication with a global gateway-as-a-service.
- Control device APIs securely from the cloud using ngrok on IoT devices.
- Capture, inspect, and replay traffic to debug and optimize web services.
- SDKs and integrations for popular programming languages to streamline workflows.

2. frp (Fast Reverse Proxy)

frp (Fast Reverse Proxy) is a high-performance tool designed to expose local servers located behind NAT or firewalls to the internet. Supporting protocols like TCP, UDP, HTTP, and HTTPS, frp enables seamless request forwarding to internal services via custom domain names. It also includes a peer-to-peer (P2P) connection mode for direct communication, making it a versatile solution for developers and system administrators. A minimal configuration sketch follows this list.

Key features:
- Expose local servers securely, even behind NAT or firewalls, using TCP, UDP, HTTP, or HTTPS protocols.
- Token and OIDC authentication for secure connections.
- Advanced configurations such as encryption, compression, and TLS for enhanced security.
- Efficient traffic handling with features like TCP stream multiplexing, QUIC protocol support, and connection pooling.
- Monitoring and management through a server dashboard, client admin UI, and Prometheus integration.
- Flexible routing options, including URL routing, custom subdomains, and HTTP header rewriting.
- Load balancing and service health checks for reliable performance.
- Port reuse, port range mapping, and bandwidth limits for granular control.
- Simplified SSH tunneling with a built-in SSH Tunnel Gateway.
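To make the frp setup concrete, here is a rough sketch in frp’s classic INI style (used by older releases; newer ones use TOML). The server address, ports, and domain are placeholders:

```ini
# frps.ini (runs on the public server)
[common]
bind_port = 7000
vhost_http_port = 8080

# frpc.ini (runs on the machine behind NAT)
[common]
server_addr = 203.0.113.10
server_port = 7000

[web]
type = http
local_port = 80
custom_domains = www.example.com
```

With this in place, requests for www.example.com arriving on the server’s vhost_http_port should be forwarded to port 80 on the machine running frpc.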
3. localtunnel

Localtunnel is an open-source, self-hosted tool that simplifies the process of exposing local web services to the internet. By creating a secure tunnel, Localtunnel allows developers to share their local resources without needing to configure DNS or firewall settings. It’s built on Node.js and can be easily installed using npm. While Localtunnel is straightforward and effective, the project hasn’t seen active maintenance since 2022, and the default Localtunnel.me server’s long-term reliability is uncertain. However, you can host your own Localtunnel server for better control and scalability.

Key features:
- Secure HTTPS for all tunnels, ensuring safe connections.
- Share your local development environment with a unique, publicly accessible URL.
- Test webhooks and external API callbacks with ease.
- Integrate with cloud-based browser testing tools for UI testing.
- Restart your local server seamlessly, as Localtunnel automatically reconnects.
- Request a custom subdomain or proxy to a hostname other than localhost for added flexibility.

4. boringproxy

boringproxy is a reverse proxy and tunnel manager designed to simplify the process of securely exposing self-hosted web services to the internet. Whether you’re running a personal website, Nextcloud, Jellyfin, or other services behind a NAT or firewall, boringproxy handles all the complexities, including HTTPS certificate management and NAT traversal, without requiring port forwarding or extensive configuration. It’s built with self-hosters in mind, offering a simple, fast, and secure solution for remote access.

Key features:
- 100% free and open source under the MIT license, ensuring transparency and flexibility.
- No configuration files required: boringproxy works with sensible defaults and simple CLI parameters for easy adjustments.
- No need for port forwarding, NAT traversal, or firewall rule configuration, as boringproxy handles it all.
- End-to-end encryption with optional TLS termination at the server, client, or application, integrated seamlessly with Let’s Encrypt.
- Fast web GUI for managing tunnels, which works great on both desktop and mobile browsers.
- Fully configurable through an HTTP API, allowing for automation and integration with other tools.
- Cross-platform support on Linux, Windows, Mac, and ARM devices (e.g., Raspberry Pi and Android).
- SSH support for those who prefer using a standard SSH client for tunnel management.

5. zrok

zrok is a next-generation, peer-to-peer sharing platform built on OpenZiti, a programmable zero-trust network overlay. It enables users to share resources securely, both publicly and privately, without altering firewall or network configurations. Designed for technical users, zrok provides frictionless sharing of HTTP, TCP, and UDP resources, along with files and custom content.

Key features:
- Share resources with non-zrok users over the public internet or directly with other zrok users in a peer-to-peer manner.
- Works seamlessly on Windows, macOS, and Linux systems.
- Start sharing within minutes using the zrok.io service: download the binary, create an account, enable your environment, and share with a single command (see the sketch below).
- Easily expose local resources like localhost:8080 to public users without compromising security.
- Share “network drives” publicly or privately and mount them on end-user systems for easy access.
- Integrate zrok’s sharing capabilities into your applications with the Go SDK, which supports net.Conn and net.Listener for familiar development workflows.
- Deploy zrok on a Raspberry Pi or scale it for large service instances; the single binary contains everything needed to operate your own zrok environment.
- Leverages OpenZiti’s zero-trust principles for secure and programmable network overlays.
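As a rough idea of the “share with a single command” workflow mentioned above, a hedged sketch based on zrok’s documented quickstart (the account token and port are placeholders):

```bash
# One-time setup: enable this environment against your zrok account
zrok enable <your-account-token>

# Share a local web service publicly; zrok prints a public URL
zrok share public localhost:8080
```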
6. PageKite

PageKite is a veteran in the tunneling space, providing HTTP(S) and TCP tunnels for more than 14 years. It offers features like IP whitelisting and password authentication, and supports custom domains. While the project is completely open source and written in Python, the public service imposes limits, such as bandwidth caps, to prevent abuse. Users can unlock additional features and higher bandwidth through affordable payment plans. The free tier provides 2 GB of monthly transfer quota and supports custom domains, making it accessible for personal and small-scale use.

Key features:
- Enables any computer, such as a Raspberry Pi, laptop, or even an old cell phone, to act as a server for hosting services like WordPress, Nextcloud, or Mastodon while keeping your home IP hidden.
- Provides simplified SSH access to mobile or virtual machines and ensures privacy by keeping firewall ports closed.
- Supports embedded developers with features like naming and accessing devices in the field, secure communications via TLS, and scaling solutions for both lightweight and large-scale deployments.
- Offers web developers the ability to test and debug work remotely, interact with secure APIs, and run webhooks, API servers, or Git repositories directly from their systems.
- Utilizes a global relay network to ensure low latency, high availability, and redundancy, with infrastructure managed since 2010.
- Ensures privacy by routing all traffic through its relays, hiding your IP address, and supporting both end-to-end and wildcard TLS encryption.

7. Chisel

Chisel is a fast and efficient TCP/UDP tunneling tool transported over HTTP and secured using SSH. Written in Go, Chisel is designed to bypass firewalls and provide a secure endpoint into your network. It is distributed as a single executable that functions as both client and server, making it easy to set up and use; a short usage sketch follows this list.

Key features:
- Offers a simple setup process with a single executable for both client and server functionality.
- Secures connections using SSH encryption and supports authenticated client and server connections through user configuration files and fingerprint matching.
- Automatically reconnects clients with exponential backoff, ensuring reliability in unstable networks.
- Allows clients to create multiple tunnel endpoints over a single TCP connection, reducing overhead and complexity.
- Supports reverse port forwarding, enabling connections to pass through the server and exit via the client.
- Provides optional SOCKS5 support for both clients and servers, offering additional flexibility in routing traffic.
- Enables tunneling through SOCKS or HTTP CONNECT proxies and supports SSH over HTTP using ssh -o ProxyCommand.
- Performs efficiently, making it suitable for high-performance requirements.
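To give a feel for Chisel’s single-binary workflow, a hedged sketch of a reverse tunnel (hostnames and ports are placeholders):

```bash
# On the public server: listen on port 8080 and allow reverse tunnels
chisel server --port 8080 --reverse

# On the machine behind NAT: expose local port 3000
# as port 9000 on the public server
chisel client http://203.0.113.10:8080 R:9000:localhost:3000
```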
8. Telebit

Telebit has quickly become one of my favorite tunneling tools, and it’s easy to see why. It’s still fairly new but does a great job of getting things done. By installing Telebit Remote on any device, be it your laptop, Raspberry Pi, or another device, you can easily access it from anywhere. The magic happens thanks to a relay system that allows multiplexed incoming connections on any external port, making remote access a breeze. Not only that, but it also lets you share files and configure it like a VPN.

Key features:
- Share files securely between devices.
- Access your Raspberry Pi or other devices from behind a firewall.
- Use it like a VPN for additional privacy and control.
- SSH over HTTPS, even on networks with restricted ports.
- Simple setup with clear documentation and an installer script that handles everything.

9. tunnel.pyjam.as

As a web developer, one of my favorite tools for quickly sharing projects with clients is tunnel.pyjam.as. It allows you to set up SSL-terminated, ephemeral HTTP tunnels to your local machine without needing to install any custom software, thanks to WireGuard. It’s perfect for when you want to quickly show someone a project you’re working on without the hassle of complex configurations.

Key features:
- No software installation required, thanks to WireGuard.
- Quickly set up a reverse proxy to share your local services.
- SSL-terminated tunnels for secure connections.
- Simple to use, with just a curl command to start and stop tunnels.
- Ideal for quick demos or temporary access to local projects.

Final thoughts

When it comes to tunneling tools, there’s no shortage of options, and each of the projects we’ve discussed here offers something unique. Personally, I’m too deeply invested in Cloudflare Tunnel to stop using it anytime soon. It’s become a key part of my workflow, and I rely on it for many of my use cases. However, that doesn’t mean I won’t continue exploring these open-source alternatives; I’m always excited to see how they evolve. For instance, with tunnel.pyjam.as, I find it incredibly time-saving to simply edit the tunnel.conf file and run its WireGuard instance to quickly share my projects with clients. I’d love to hear what you think! Have you tried any of these open-source tunneling tools, or do you have your own favorites? Let me know in the comments.
-
How to Install DeepSeek R1 Locally on Linux
by: Abhishek Kumar
-
Content Marketing for Linux/Unix Businesses: Why Outsourcing Makes Sense
By: Janus Atienza Fri, 31 Jan 2025 00:11:21 +0000

In today’s competitive digital landscape, small businesses need to leverage every tool and strategy available to stay relevant and grow. One such strategy is content marketing, which has proven to be an effective way to reach, engage, and convert potential customers. However, for many small business owners, managing content creation and distribution can be time-consuming and resource-intensive. This is where outsourcing content marketing services comes into play. Let’s explore why this approach is not only smart but also essential for the long-term success of small businesses.

1. Expertise and Professional Quality

Outsourcing content marketing services allows small businesses to tap into the expertise of professionals who specialize in content creation and marketing strategies. These experts are equipped with the skills, tools, and experience necessary to craft high-quality content that resonates with target audiences. Whether it’s blog posts, social media updates, or email newsletters, professional content marketers understand how to write compelling copy that engages readers and drives results. For Linux/Unix-focused content, this might include experts who understand shell scripting for automation or using tools like grep for SEO analysis. In addition, they are well-versed in SEO best practices, which means they can optimize content to rank higher in search engines, ultimately driving more traffic to your website. This level of expertise is difficult to replicate in-house, especially for small businesses with limited resources.

2. Cost Efficiency

For many small businesses, hiring a full-time in-house marketing team may not be financially feasible. Content creation involves a range of tasks, from writing and editing to publishing and promoting, and this can be a significant investment in terms of both time and money. By outsourcing content marketing services, small businesses can access the same level of expertise without the overhead costs associated with hiring additional employees. This can be especially true in the Linux/Unix world, where open-source tools can significantly reduce software costs. Outsourcing allows businesses to pay only for the services they need, whether it’s a one-off blog post or an ongoing content strategy. This flexibility can help businesses manage their budgets effectively while still benefiting from high-quality content marketing efforts.

3. Focus on Core Business Functions

Outsourcing content marketing services frees up time for small business owners and their teams to focus on core business functions. Small businesses often operate with limited personnel, and each member of the team is usually responsible for multiple tasks. When content marketing is outsourced, the business can concentrate on what it does best, whether that’s customer service, product development, or sales, without getting bogged down in the complexities of content creation. For example, a Linux system administrator can focus on server maintenance instead of writing blog posts. This improved focus on core operations can lead to better productivity and business growth, while the outsourced content team handles the strategy and execution of the marketing efforts.

4. Consistency and Reliability

One of the key challenges of content marketing is maintaining consistency. Inconsistent content delivery can confuse your audience and hurt your brand’s credibility.
Outsourcing content marketing services ensures that content is consistently produced, published, and promoted according to a set schedule. Whether it’s weekly blog posts or daily social media updates, a professional team will adhere to a content calendar, ensuring that your business maintains a strong online presence. This can be further enhanced by using automation scripts (common in Linux/Unix environments) to schedule and distribute content. Consistency is crucial for building a loyal audience, and a reliable content marketing team will ensure that your business stays top-of-mind for potential customers.

5. Access to Advanced Tools and Technologies

Effective content marketing requires the use of various tools and technologies, from SEO and analytics platforms to content management systems and social media schedulers. Small businesses may not have the budget to invest in these tools or the time to learn how to use them effectively. Outsourcing content marketing services allows businesses to benefit from these advanced tools without having to make a significant investment. This could include access to specialized Linux-based SEO tools or experience with open-source CMS platforms like Drupal or WordPress. Professional content marketers have access to premium tools that can help with keyword research, content optimization, performance tracking, and more. These tools provide valuable insights that can inform future content strategies and improve the overall effectiveness of your marketing efforts.

6. Scalability

As small businesses grow, their content marketing needs will evolve. Outsourcing content marketing services provides the flexibility to scale efforts as necessary. Whether you’re launching a new product, expanding into new markets, or simply need more content to engage your growing audience, a content marketing agency can quickly adjust to your changing needs. This is especially relevant for Linux-based businesses that might experience rapid growth due to the open-source nature of their offerings. This scalability ensures that small businesses can maintain an effective content marketing strategy throughout their growth journey, without the need to continually hire or train new employees.

Conclusion

Outsourcing content marketing services is a smart move for small businesses looking to improve their online presence, engage with their target audience, and drive growth. By leveraging the expertise, cost efficiency, and scalability that outsourcing offers, small businesses can focus on what matters most, running their business, while leaving the content marketing to the professionals. Especially for businesses in the Linux/Unix ecosystem, this allows them to concentrate on technical development while expert marketers reach their specific audience. In a digital world where content is king, investing in high-quality content marketing services can make all the difference.

The post Content Marketing for Linux/Unix Businesses: Why Outsourcing Makes Sense appeared first on Unixmen.
-
The Mistakes of CSS
by: Juan Diego Rodríguez Thu, 30 Jan 2025 14:31:08 +0000

Surely you have seen a CSS property and thought “Why?” You are not alone. CSS was born in 1996 (it can legally order a beer, you know!) and was initially considered a way to style documents; I don’t think anyone imagined everything CSS would be expected to do nearly 30 years later. If we had a time machine, many things would be done differently to match conventions or to make more sense. Heck, even the CSS Working Group admits to wanting a time-traveling contraption… in the specifications!

If by some stroke of opportunity I was given free rein to rename some things in CSS, a couple of ideas come to mind, but if you want more, you can find an ongoing list of mistakes made in CSS… by the CSS Working Group! Take, for example, background-repeat.

Why not fix them?

Sadly, it isn’t as easy as fixing something. People already built their websites with these quirks in mind, and changing them would break those sites. Consider it technical debt. This is why I think the CSS Working Group deserves an onslaught of praise. Designing new features that are immutable once shipped has to be a nerve-wracking experience that involves inexact science. It’s not that we haven’t seen the specifications change or evolve in the past (they most certainly have), but the value of getting things right the first time is a beast of burden.

The Mistakes of CSS originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
-
FOSS Weekly #25.05: LibreOffice Tip, Launcher Customization, Moving Away from Google and More
by: Abhishek Prakash

In the previous newsletter, I shared the new tools directory page proposal and asked for your feedback. From the responses I got, an overwhelming majority of FOSSers liked this idea. So I’ll work on such pages. Since I want them to have some additional features, they will take a little longer. I’ll inform you once they are live. Stay tuned 😄 Would you like to see more pages like this?

💬 Let’s see what else you get in this edition:
- A new Hyprland release.
- FSF’s new commemorative logo.
- Microsoft’s popular offering being handed a lawsuit.
- And other Linux news, tips and, of course, memes!

This edition of FOSS Weekly is supported by ONLYOFFICE.

✨ ONLYOFFICE PDF Editor: Create, Edit and Collaborate on PDFs on Linux

The ONLYOFFICE suite now offers an updated PDF editor that comes equipped with collaborative PDF editing and other useful features. Deploy ONLYOFFICE Docs on your Linux server and integrate it with your favourite platform, such as Nextcloud, ownCloud, Drupal, Moodle, WordPress, Redmine and more. Alternatively, you can download the free desktop app for your Linux distro.

Online PDF editor, reader and converter | ONLYOFFICE: View and create PDF files from any text document, spreadsheet or presentation, convert PDF to DOCX online, create fillable PDF forms.

📰 Linux and Open Source News
- Hyprland 0.47.0 released with HDR support and squircles.
- Bitwarden has tightened security for accounts without 2FA enabled.
- Mozilla Thunderbird 134 has landed with a new notification system.
- A few offers on Data Privacy Day.
- Microsoft has launched DocumentDB, an open source document store platform.
- The Free Software Foundation (FSF) has unveiled a fresh logo to commemorate its forthcoming 40th anniversary.

🧠 What We’re Thinking About

Facebook is banning links from many Linux websites.

Everything is Spam on Facebook Unless It is Paid Post (or Actual Spam): Linux websites are getting ill-treatment by Facebook. (It’s FOSS News, Abhishek)

Microsoft’s popular social media platform, LinkedIn, has been dragged to court over alleged misuse of user data.

🧮 Linux Tips, Tutorials and More
- Moving away from Google’s ecosystem is a smart move in the long run.
- You can share files between guest and host operating systems in GNOME Boxes.
- A small tip on tracking changes and version control with LibreOffice.
- Learn to merge PDF files in Linux.
- And some tips on customizing the launcher in Ubuntu.

👷 Maker’s and AI Corner

Running the impressive DeepSeek R1 AI model on a Raspberry Pi 5 is possible.

I Ran Deepseek R1 on Raspberry Pi 5 and No, it Wasn’t 200 tokens/s: Everyone is seeking Deepseek R1 these days. Is it really as good as everyone claims? Let me share my experiments of running it on a Raspberry Pi. (It’s FOSS, Abhishek Kumar)

✨ Apps highlight

If you like listening to audiobooks, then Cozy can be a great addition to your Linux system.

Cozy: A Super Useful Open Source Audiobook Player for Linux: Cozy makes audiobook listening easy with simple controls and an intuitive interface. (It’s FOSS News, Sourav Rudra)

Take your music anywhere with the open source Musify app.

🛍️ Deal You Would Love

15 Linux and DevOps books for just $18, plus your purchase supports the Code for America organization. Get them on Humble Bundle.

Humble Tech Book Bundle: Linux from Beginner to Professional by O’Reilly: Learn Linux with ease using this library of coding and programming courses by O’Reilly. Pay what you want & support Code For America.

🎟️ Event alert

Foss FEST 2025 is open for registration.
Groups of international students can participate in the hackathon and win prizes worth 4,000 euros. It’s FOSS is an official media partner for this event.

Foss FEST 2025: International Hackathon (OpenSource Science B.V.)

🧩 Quiz Time

Can you beat this Linux Directory Structure puzzle?

Linux Directory Structure: Puzzle. The Linux directory structure is fascinating and an important thing to know about. Take a guess to solve this puzzle! (It’s FOSS, Ankush Das)

💡 Quick Handy Tip

You can easily open new windows for running apps by either middle-clicking or Ctrl+left-clicking on the app in the dock. It also works for apps that are not running. Usually, the apps open in the same workspace; however, in multi-monitor setups, this might open new app windows on the other monitor.

🤣 Meme of the Week

Windows got destroyed hard. 🤭

🗓️ Tech Trivia

Apple launched the iPad on April 3, 2010, redefining mobile computing with its touch-based, versatile design. It bridged the gap between smartphones and laptops, setting the standard for tablets. The iPad’s success has reshaped the tech world and inspired countless imitators.

🧑‍🤝‍🧑 FOSSverse Corner

Pro FOSSer Daniel is showcasing his Gentoo virtual machine setup on his laptop.

Gentoo vm install on my laptop: “Up early this morning putting the finishing touches to Gentoo VM, to reboot to the CLI!! I have compiled two kernels, a gentoo-source, that I did a manual compile and a gentoo-kernel-dis for backup!! If the gentoo-source kernel works, I will nuke the gentoo-kernel. Just for fun, take a look-see. Been awhile since we have had a 8 inch snow!!!” (It’s FOSS Community, Daniel_Phillips)

❤️ With love

Share it with your Linux-using friends and encourage them to subscribe (hint: it’s here). Share the articles in Linux subreddits and community forums. Follow us on Google News and stay updated in your News feed. Opt for It’s FOSS Plus membership and support us 🙏

Enjoy FOSS 😄
-
LHB Linux Digest #25.02: Linux Books, Watchtower, ReplicaSets and More
by: Abhishek Prakash Wed, 29 Jan 2025 20:04:25 +0530

What’s in a name? Sometimes the name can be deceptive. For example, in the Linux Tips and Tutorials section of this newsletter, I am sharing a few commands that have nothing to do with what their name indicates 😄

Here are the other highlights of this edition of LHB Linux Digest:
- Nice and renice commands
- ReplicaSet in Kubernetes
- Self-hosted code snippet manager
- And more tools, tips and memes for you

This edition of LHB Linux Digest newsletter is supported by RELIANOID.

❇️ Comprehensive Load Balancing Solutions For Modern Networks

RELIANOID’s load balancing solutions combine the power of SD-WAN, secure application delivery, and elastic load balancing to optimize traffic distribution and ensure unparalleled performance. With features like a robust Web Application Firewall (WAF) and built-in DDoS protection, your applications remain secure and resilient against cyber threats. High availability ensures uninterrupted access, while open networking and user experience networking enhance flexibility and deliver a seamless experience across all environments, from on-premises to cloud.

Free Load Balancer Download | Community Edition by RELIANOID: Discover our Free Load Balancer | Community Edition | The best Open Source Load Balancing software for providing high availability and content switching services.

📖 Linux Tips and Tutorials
- There is an install command in Linux, but it doesn’t install anything.
- There is a hash command in Linux, but it doesn’t have anything to do with hashing passwords.
- There is a tree command in Linux, but it has nothing to do with plants.
- There is a wc command in Linux, and it has nothing to do with washrooms 🚻 (I understand that you know what wc stands for in the command, but I still find it amusing).

Using nice and renice commands to change process priority; a quick usage sketch follows below.

Change Process Priority With nice and renice Commands: You can modify whether a certain process should get priority in consuming CPU with the nice and renice commands. (Linux Handbook, Helder)
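The linked article has the details, but as a quick illustration (not from the original text), here is a minimal sketch of how nice and renice are typically used; the command and PID are hypothetical:

```bash
# Start a CPU-heavy job with lower priority (nice value 10)
nice -n 10 tar -czf backup.tar.gz /home/user

# Lower the priority of an already-running process with PID 1234
renice 15 -p 1234
```

Higher nice values mean lower scheduling priority, and only root can assign negative (higher-priority) values.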