
Everything posted by Blogger
-
Thea
by: aiparabellum.com Fri, 17 Jan 2025 02:59:35 +0000 https://www.theastudy.com/?referralCode=aipara Thea Study is a revolutionary AI-powered platform designed to optimize studying and learning for students of all levels. With its user-friendly interface and cutting-edge technology, Thea serves as a personalized study companion that adapts to your learning style. Whether you’re preparing for standardized tests, mastering school subjects, or needing quick summaries of your notes, Thea offers an innovative solution tailored to your needs. Completely free until April 15, 2025, Thea is transforming how students prepare for academic success. Features of Thea Thea offers a robust suite of features aimed at enhancing study efficiency and understanding: Smart Study Practice with varied questions using the Socratic method to improve comprehension. Gain deeper understanding through dynamic, interactive learning. Flashcards Create instant, interactive flashcards for effective memorization. Review on-the-go with engaging games and activities. Summarize Upload study materials, and Thea generates concise summaries in seconds. Break down complex content into manageable, digestible parts. Study Guides Generate comprehensive study guides effortlessly. Download guides instantly for offline use. Test Simulation Experience real exam conditions with Thea’s test environment. Reduce test anxiety and enhance readiness for the big day. Spaced Repetition Utilize scientifically-backed learning techniques to strengthen long-term memory. Review material at optimal intervals for maximum retention. Language Support Access Thea in over 80 languages for a truly global learning experience. Customizable Difficulty Levels Adjust question difficulty to match your learning needs, from beginner to advanced. How It Works Thea is designed to be intuitive and easy to use. Here’s how it works: Create a free account to access all features. Start by uploading study materials or selecting a subject. Choose the desired study feature: flashcards, summaries, or tests. Let Thea generate personalized content tailored to your input. Practice using interactive tools like Smart Study or Test Simulation. Track your progress and refine your approach based on performance. Benefits of Thea Thea offers numerous advantages for students aiming to optimize their academic efforts: Time Efficiency Thea reduces study time by offering precise, ready-to-use materials. Stress Reduction Simulated test environments and organized study guides help alleviate anxiety. Personalized Learning Adaptive features cater to individual learning styles and needs. Accessibility Completely free until 2025, making it accessible to all learners globally. Versatility Suitable for various subjects, including math, history, biology, and more. Global Reach Supports multiple languages and educational systems worldwide. Pricing Thea is currently free to use with no paywalls, ensuring accessibility to students worldwide. This free access is guaranteed until at least April 15, 2025. Pricing details for future plans are yet to be finalized, but the platform’s affordability will remain a priority. Thea Review Students and educators worldwide highly praise Thea for its innovative and effective approach to studying. Testimonials highlight its ability to improve grades, reduce stress, and make learning engaging. Users appreciate features like the instant flashcards, test simulation, and comprehensive summaries, which set Thea apart from other study platforms. 
The platform is often described as a game-changer that combines advanced AI with simplicity and usability. Conclusion Thea Study is redefining how students approach learning by offering a comprehensive, AI-powered study solution. From personalized content to real exam simulations, Thea ensures that every student can achieve their academic goals with ease. Whether you’re preparing for AP exams, IB tests, or regular coursework, Thea’s innovative tools will save you time and enhance your understanding. With free access until 2025, there’s no better time to explore Thea as your ultimate study companion. Visit Website The post Thea appeared first on AI Parabellum.
-
Fixing AttributeError: module ‘pkgutil’ has no attribute ‘ImpImporter’
By: Joshua Njiru Thu, 16 Jan 2025 19:44:28 +0000

Understanding the Error
The error "AttributeError: module 'pkgutil' has no attribute 'ImpImporter'" typically occurs in Python code that attempts to use the pkgutil module to access ImpImporter. This happens because ImpImporter was removed in Python 3.12 as part of the deprecation of the old import system.

Root Cause
The removal of ImpImporter is due to:
the deprecation of the imp module in favor of importlib,
the modernization of Python's import system, and
changes in Python 3.12 that eliminate legacy import mechanisms.

Solutions to Fix the Error

Solution 1: Update Your Code to Use importlib
Replace pkgutil.ImpImporter with the modern importlib equivalent.

Old code:
from pkgutil import ImpImporter

New code (FileFinder takes the directory path plus one or more (loader, suffixes) pairs):
from importlib import machinery
loader_details = (machinery.SourceFileLoader, machinery.SOURCE_SUFFIXES)
finder = machinery.FileFinder(path, loader_details)

Solution 2: Use zipimporter Instead
If you're working with ZIP archives, use zipimporter from the zipimport module (there is no ZipImporter in pkgutil).

Old code:
from pkgutil import ImpImporter

New code:
from zipimport import zipimporter
importer = zipimporter('/path/to/your/zipfile.zip')

Solution 3: Downgrade Python Version
If updating the code isn't possible, downgrade to Python 3.11.

Create a virtual environment with Python 3.11:
python3.11 -m venv env
source env/bin/activate   # On Unix
env\Scripts\activate      # On Windows

Install your dependencies:
pip install -r requirements.txt

Code Examples for Common Use Cases

Example 1: Module Discovery
Modern approach using importlib:
from importlib import util

def find_module(name, package=None):
    spec = util.find_spec(name, package)
    if spec is None:
        return None
    return spec.loader

Example 2: Package Resource Access
Using importlib.resources:
from importlib import resources

def get_package_data(package, resource):
    with resources.path(package, resource) as path:
        return path

Prevention Tips
Always check Python version compatibility when using import-related functionality.
Use importlib instead of pkgutil for new code.
Keep dependencies updated.
Test code against new Python versions before upgrading.

Common Pitfalls
Mixed Python versions in different environments.
Old dependencies that haven't been updated.
Copying legacy code without checking compatibility.

Long-Term Solutions
Migrate to importlib completely.
Update all package loading code to use modern patterns.
Implement proper version checking in your application.

Checking Your Environment
Run the following diagnostic code to check your setup:

import sys
import importlib
import importlib.machinery  # ensure the machinery submodule is loaded

def check_import_system():
    print(f"Python version: {sys.version}")
    try:
        print(f"Importlib version: {importlib.__version__}")
    except AttributeError:
        print("Importlib does not have a version attribute.")
    print("\nAvailable import mechanisms:")
    for attr in dir(importlib.machinery):
        if attr.endswith('Loader') or attr.endswith('Finder'):
            print(f"- {attr}")

if __name__ == "__main__":
    check_import_system()

More Articles from Unixmen
Fixing OpenLDAP Error: ldapadd Undefined Attribute Type (17)
Using the cp Command to Copy a Directory on Linux

The post Fixing "AttributeError: module 'pkgutil' has no attribute 'ImpImporter'" appeared first on Unixmen.
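A note beyond the solutions above: in practice this error often surfaces not in your own code but while pip builds an older package on Python 3.12, because an outdated setuptools (via pkg_resources) still references pkgutil.ImpImporter. Assuming that is the trigger, upgrading the packaging toolchain in the affected environment is usually enough; a minimal sketch:

# Upgrade the build tooling so package builds no longer rely on the removed API
python -m pip install --upgrade pip setuptools wheel
# Then retry the installation that originally failed
pip install -r requirements.txt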
-
How to Install Arch Linux
By: Joshua Njiru Thu, 16 Jan 2025 19:42:43 +0000
Arch Linux is a popular Linux distribution for experienced users. It's known for its rolling release model, which means you're always using the latest software. However, Arch Linux can be more challenging to install and maintain than other distributions. This article will walk you through the process of installing Arch Linux, from preparation to first boot. Follow each section carefully to ensure a successful installation.

Prerequisites
Before beginning the installation, it is crucial to ensure that you have:
A USB drive (minimum 4GB)
Internet connection
Basic knowledge of command line operations
At least 512MB RAM (2GB recommended)
20GB+ free disk space
Backed up important data

Creating Installation Media
Download the latest ISO from archlinux.org.
Verify the ISO signature for security (a sample verification command is shown at the end of this article).
Create a bootable USB using the dd command:
sudo dd bs=4M if=/path/to/archlinux.iso of=/dev/sdx status=progress oflag=sync

Boot Preparation
Enter BIOS/UEFI settings.
Disable Secure Boot.
Set boot priority to USB.
Save and exit.

What are the Initial Boot Steps?
Boot from USB and select "Arch Linux install medium".
Verify the boot mode:
ls /sys/firmware/efi/efivars

Internet Connection
For a wired connection:
ip link
dhcpcd
For wireless:
iwctl
station wlan0 scan
station wlan0 connect SSID
Verify the connection:
ping archlinux.org

System Clock
Update the system clock:
timedatectl set-ntp true

Disk Partitioning
List available disks:
lsblk
Create partitions (example using fdisk):
fdisk /dev/sda
For UEFI systems:
EFI System Partition (ESP): 512MB
Root partition: remaining space
Swap partition (optional): equal to RAM size
For Legacy BIOS:
Root partition: most of the disk
Swap partition (optional)
Format the partitions:
mkfs.fat -F32 /dev/sda1   # EFI partition
mkfs.ext4 /dev/sda2       # root partition
mkswap /dev/sda3          # swap
swapon /dev/sda3

Mounting Partitions
mount /dev/sda2 /mnt      # mount the root partition
mkdir /mnt/boot           # for UEFI systems, mount the ESP
mount /dev/sda1 /mnt/boot

Base System Installation
Install essential packages:
pacstrap /mnt base linux linux-firmware base-devel

System Configuration
Generate the fstab:
genfstab -U /mnt >> /mnt/etc/fstab
Change root into the new system:
arch-chroot /mnt
Set the timezone:
ln -sf /usr/share/zoneinfo/Region/City /etc/localtime
hwclock --systohc
Configure the locale:
nano /etc/locale.gen      # uncomment en_US.UTF-8 UTF-8
locale-gen
echo "LANG=en_US.UTF-8" > /etc/locale.conf
Set the hostname:
echo "myhostname" > /etc/hostname
Configure the hosts file:
nano /etc/hosts
# Add:
127.0.0.1 localhost
::1 localhost
127.0.1.1 myhostname.localdomain myhostname

Boot Loader Installation
For GRUB on UEFI systems:
pacman -S grub efibootmgr
grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=GRUB
grub-mkconfig -o /boot/grub/grub.cfg
For GRUB on Legacy BIOS:
pacman -S grub
grub-install --target=i386-pc /dev/sda
grub-mkconfig -o /boot/grub/grub.cfg

Network Configuration
Install a network manager:
pacman -S networkmanager
systemctl enable NetworkManager

User Management
Set the root password:
passwd
Create a user account:
useradd -m -G wheel username
passwd username
Configure sudo:
EDITOR=nano visudo        # uncomment %wheel ALL=(ALL) ALL

Final Steps
Exit the chroot:
exit
Unmount the partitions:
umount -R /mnt
Reboot:
reboot

Post-Installation
After the first boot:
Install graphics drivers:
pacman -S xf86-video-amdgpu     # for AMD
pacman -S nvidia nvidia-utils   # for NVIDIA
Install a desktop environment (example with GNOME):
pacman -S xorg gnome
systemctl enable gdm
Install common applications (plus your preferred terminal emulator and file manager):
pacman -S firefox

Troubleshooting Tips
If the bootloader fails to install, verify EFI variables are available.
For wireless issues, ensure firmware is installed.
Check logs with journalctl for error messages.
Verify partition mounts with lsblk.

Maintenance Recommendations
Regular system updates:
pacman -Syu
Clean the package cache periodically:
pacman -Sc
Check system logs regularly:
journalctl -p 3 -xb

More Articles from Unixmen
https://www.unixmen.com/minimal-tools-on-arch-linux/
https://www.unixmen.com/top-things-installing-arch-linux/

The post How to Install Arch Linux appeared first on Unixmen.
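The ISO signature verification mentioned under "Creating Installation Media" can be done with GnuPG from any existing Linux install, or with pacman-key from an existing Arch system. A minimal sketch; the release date in the file names is a placeholder for whatever image you actually downloaded:

# Verify with GnuPG (fetches the signing key automatically)
gpg --keyserver-options auto-key-retrieve --verify archlinux-2025.01.01-x86_64.iso.sig archlinux-2025.01.01-x86_64.iso
# Or, from an existing Arch Linux system
pacman-key -v archlinux-2025.01.01-x86_64.iso.sig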
-
Taking Screenshots in Hyprland
-
FOSS Weekly #25.03: Mint 22.1 Released, AI in VLC, Dual Boot Myths, Torvalds' Guitar Offer and More
by: Abhishek Prakash Linux Mint 22.1 codenamed Xia is available now. I expected this point release to arrive around Christmas. But it got delayed a little, if I can call it a delay, as there are no fixed release schedule. Wondering what's new in Mint 22.1? Check this out 👇 6 Exciting Features in Linux Mint 22.1 ‘Xia’ Release Linux Mint’s latest upgrade is available. Explore more about it before you try it out! It's FOSS NewsAnkush Das And the Tuxmas lifetime membership offer is now over. We reached the milestone of 100 lifetime Plus members. Thank you for your support 🙏 💬 Let's see what else you get in this edition A new Raspberry Pi 5 variant. AI coming to VLC media player. Nobara being the first one to introduce a release in 2025. And other Linux news, videos and, of course, memes! This edition of FOSS Weekly is supported by PikaPods. ❇️ PikaPods: Self-hosting Without HasslePikaPods allows you to quickly deploy your favorite open source software. All future updates are handled automatically by PikaPods while you enjoy using the software. Did I tell you that they also share revenue with the original developers of the software? Oh! You also get a $5 free credit to try it out and see if you can rely on PikaPods. PikaPods - Instant Open Source App Hosting Run the finest Open Source web apps from $1/month, fully managed, no tracking, no ads, full privacy. Self-hosting was never this convenient. Instant Open Source App Hosting 📰 Linux and Open Source NewsMicrosoft's Phi-4 AI model has been made open source. The Raspberry Pi 5 now has a 16 GB variant for power users. COSMIC Alpha 5 serves as a reminder of how the desktop environment is progressing. Flatpak version 1.16 is released with new features Earlier, Kdenlive introduced AI feature and now VLC is adding AI subtitles. AI Subtitles Are Coming to VLC— Get Ready! VLC is adding the ability to generate subtitles with the help of AI. It's FOSS NewsSourav Rudra 🧠 What We’re Thinking AboutLinus Torvalds is proposing to build a guitar effects pedal for one lucky kernel contributor. Linus Torvalds offers to build free guitar effects pedal ‘I’m a software person with a soldering iron’, he warns alongside release of Linux 6.13-rc7 The RegisterSimon Sharwood 🧮 Linux Tips, Tutorials and MoreLevel up your Gedit experience with these 10 tweaks. Using your phone's camera and mic in Ubuntu is possible. You can run Windows apps on Linux by following this beginner's guide. Don’t Believe These Dual Boot Myths Don’t listen to what you hear. I tell you the reality from my dual booting experience. It's FOSSAnkush Das 👷 Maker's and AI CornerArmSoM AIM7 sets the stage for cutting-edge AI applications. ArmSoM AIM7: A Promising Rockchip Device for AI Development Harness the power of RK3588 Rockchip processor for AI development with ArmSoM RK3588 AI Module 7 (AIM7) AI kit. It's FOSSAbhishek Kumar Usenet was where conversations took place before social media came about. Remembering Usenet - The OG Social Network that Existed Even Before the World Wide Web Before Facebook, before MySpace and even before the Word Wide Web, there existed Usenet. From LOL to Linux, we owe a lot to Usenet. It's FOSSBill Dyer 📹 Videos we are watchingSubscribe to our YouTube channel ✨ Apps of the WeekWhat's so clever about KleverNotes? Find out: KleverNotes Is A Practical Markdown Note-Taking App By KDE That’s a clever markdown-powered editor. Give it a try! 
It's FOSS NewsSourav Rudra 🛍️Deal You Would LoveChallenge your brain and have a blast learning with these acclaimed logic and puzzle games exploring key concepts of programming and machine learning. New Year, New You: Programming Games Have fun learning about programming and machine learning in this puzzle and logic game bundle featuring while True: learn(), 7 Billion Humans, and more. Humble Bundle 🧩 Quiz TimeHere's a fun crossword for correctly guessing the full forms of the mentioned acronyms. Expand the Short form: Crossword It’s time for you to solve a crossword! It's FOSSAnkush Das 💡 Quick Handy TipYou can search for free icons from Font Awesome or Nerd Fonts to add to panels and terminal tools like Fastfetch. Ensure that you install the respective fonts, font-awesome and firacode-nerd on your system before using. Otherwise, they won't appear properly. On Font Awesome, click on Copy Glyph to copy the icon to the clipboard. And in Nerd Fonts, click on the Icons button to copy the icon to the clipboard. 🤣 Meme of the WeekYep, that happens. 😆 🗓️ Tech TriviaWikipedia launched on January 15, 2001, as a free, collaborative encyclopedia. Created by Jimmy Wales and Larry Sanger, it grew to host millions of articles in multiple languages. Today, it’s one of the most visited websites globally, embodying the spirit of open knowledge. 🧑🤝🧑 FOSSverse CornerWould it be possible to learn to code after 50? Community members share their views and experience. 50+ and learning to code...? I’m over 50 (became 50 on 28 October 2024) and, while I did do some coding in a grey past, I noticed I’m currently finding it difficult to pick it up. There’s so much to learn: Coding (in my case C++) The API of the relevant libraries I intend to use for my Amazing FLOSS Project™. 🙂 (In my case FLTK, and yaml-cpp). The language of the build system (CMake in my case). Some editor like thing with some creature comforts (in my case I’m going with sublime text). Git (including how to… It's FOSS Communityxahodo ❤️ With loveShare it with your Linux-using friends and encourage them to subscribe (hint: it's here). Share the articles in Linux Subreddits and community forums. Follow us on Google News and stay updated in your News feed. Opt for It's FOSS Plus membership and support us 🙏 Enjoy FOSS 😄
-
ZSH Autosuggestions
By: Joshua Njiru Wed, 15 Jan 2025 18:21:02 +0000

Mastering zsh-autosuggestions: A Comprehensive Guide
Working in the terminal can become significantly more efficient with the right tools. One such powerful plugin is zsh-autosuggestions, designed for the Z shell (zsh). This guide covers everything you need to know to harness the full potential of this productivity-enhancing tool.

What Is zsh-autosuggestions?
zsh-autosuggestions is a plugin for zsh that offers command suggestions as you type. These suggestions are based on your command history and completions, appearing in light gray text. You can accept them with the right arrow key or other configured keybindings, streamlining command-line navigation and reducing typing errors.

Key Benefits
The plugin provides several advantages, making it a favorite among developers and system administrators:
Minimizes typing errors by suggesting previously used commands.
Speeds up command-line navigation with fewer keystrokes.
Simplifies recall of complex commands you've used before.
Provides instant feedback as you type.
Integrates seamlessly with other zsh plugins and frameworks.

Installation Guide
You can install zsh-autosuggestions through various methods based on your setup.

Using Oh My Zsh
If you are using Oh My Zsh, follow these steps:
Clone the repository into your Oh My Zsh plugins directory:
git clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions
Add the plugin to your .zshrc file:
plugins=(... zsh-autosuggestions)
Apply the changes by restarting your terminal or running:
source ~/.zshrc

Manual Installation
For a manual installation:
Clone the repository:
git clone https://github.com/zsh-users/zsh-autosuggestions ~/.zsh/zsh-autosuggestions
Add the following line to your .zshrc file:
source ~/.zsh/zsh-autosuggestions/zsh-autosuggestions.zsh
Apply the changes:
source ~/.zshrc

Configuration Options
zsh-autosuggestions is highly customizable. Here are some essential options:

Changing Suggestion Strategy
You can control how suggestions are generated:
ZSH_AUTOSUGGEST_STRATEGY=(history completion)

Customizing Appearance
Modify the suggestion color to match your preferences:
ZSH_AUTOSUGGEST_HIGHLIGHT_STYLE='fg=8'

Modifying Key Bindings
Set a custom key for accepting suggestions:
bindkey '^ ' autosuggest-accept   # Ctrl+Space

Tips for Maximum Productivity
Use partial suggestions: start typing a command and watch suggestions appear.
Combine with fuzzy finding: install fzf for advanced command-line search.
Customize strategies: adjust suggestion settings to suit your workflow.
Master shortcuts: learn keybindings to quickly accept suggestions.

Troubleshooting Common Issues

Slow Performance
Clean up your command history.
Adjust the suggestion strategy.
Update to the latest version of the plugin.

Suggestions Not Appearing
Ensure the plugin is sourced correctly in your .zshrc.
Verify terminal color support.
Check for conflicts with other plugins.

Advanced Features

Custom Suggestion Strategies
You can create your own suggestion logic by defining a function named _zsh_autosuggest_strategy_<name> and listing <name> in the strategy array (a concrete sketch follows at the end of this article):
ZSH_AUTOSUGGEST_STRATEGY=(custom_strategy)
_zsh_autosuggest_strategy_custom_strategy() {
  # Custom suggestion logic here: set the $suggestion variable
}

Integration with Other Tools
zsh-autosuggestions pairs well with:
fzf (fuzzy finder)
zsh-syntax-highlighting
zsh-completions

zsh-autosuggestions is a powerful addition to your terminal workflow. By taking the time to configure and explore its features, you can significantly enhance your productivity and efficiency.
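To make the custom strategy section above concrete, here is a minimal sketch. It assumes the plugin's documented convention that a strategy named X is implemented by a function _zsh_autosuggest_strategy_X, which receives the typed prefix as its first argument and sets the suggestion variable; the "dep"/"deploy --dry-run" pair is a made-up placeholder:

# A toy strategy: suggest a canned command for one prefix, then fall back to history
_zsh_autosuggest_strategy_demo() {
  local prefix="$1"
  if [[ "$prefix" == dep* ]]; then
    typeset -g suggestion="deploy --dry-run"   # placeholder command
  fi
}
ZSH_AUTOSUGGEST_STRATEGY=(demo history)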
Related Articles from Unixmen:
Linux Shell Scripting Part 2: Message Displaying, User Variables, and Environment Variables
Linux Shell Scripting Part 1: Starting with Linux Shell Scripting
Bash String Comparison: Comparing Strings in Shell Scripts

The post ZSH Autosuggestions appeared first on Unixmen.
-
SSH Max Limits and Optimization
By: Joshua Njiru Wed, 15 Jan 2025 17:38:03 +0000
SSH (Secure Shell) is a powerful tool for remote administration and secure data transfer. However, it's crucial to understand and configure its limits effectively to ensure optimal performance and security. This article will help you understand and configure SSH max limits for optimal performance and security.

Connection Limits
Connection limits in SSH, primarily controlled by settings like MaxStartups and MaxSessions, are crucial security measures. MaxStartups restricts the number of unauthenticated connection attempts, mitigating brute-force attacks. MaxSessions limits the number of active sessions per connection, preventing resource exhaustion and potential DoS attacks. These limits, along with other security measures like key-based authentication and firewall rules, contribute to a robust and secure SSH environment.

SSH Max Sessions
Default: 10
Location: /etc/ssh/sshd_config
Controls the maximum number of simultaneous SSH sessions per connection.
MaxSessions 10

SSH Max Startups
Format: start:rate:full
Default: 10:30:100
Controls unauthenticated connection attempts.
MaxStartups 10:30:100
# Allows 10 unauthenticated connections
# 30% probability of dropping connections when the limit is reached
# Full blocking at 100 connections

Client Alive Interval
Default: 0 (disabled)
Maximum: system dependent
Checks client connectivity every X seconds.
ClientAliveInterval 300

Client Alive Count Max
Default: 3
Maximum connection check attempts before disconnecting.
ClientAliveCountMax 3

Authentication Limits
Authentication limits in SSH primarily focus on restricting the number of failed login attempts. This helps prevent brute-force attacks where attackers systematically try various combinations of usernames and passwords to gain unauthorized access. By setting limits on the number of authentication attempts allowed per connection, you can significantly increase the difficulty for attackers to successfully compromise your system.

MaxAuthTries
Default: 6
Maximum authentication attempts before disconnecting.
MaxAuthTries 6

LoginGraceTime
Default: 120 seconds
Time allowed for successful authentication.
LoginGraceTime 120

System Resource Limits

System-wide Limits
Edit /etc/security/limits.conf:
* soft nofile 65535
* hard nofile 65535

Process Limits
# Check current limits
ulimit -n
# Set a new limit
ulimit -n 65535

Bandwidth Limits
Bandwidth limits in SSH, while not directly configurable within the SSH protocol itself, are an important consideration for overall system performance. Excessive SSH traffic can consume significant network resources, potentially impacting other applications and services.
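Since sshd itself offers no general bandwidth knob, shaping is usually applied around the SSH session rather than inside it. A minimal sketch of two common approaches; the host name and the rates are placeholders, and trickle is a separate userspace shaper you would need to install:

# Throttle an interactive session with trickle: ~500 KB/s upload, ~1000 KB/s download
trickle -u 500 -d 1000 ssh user@example.com
# Or throttle at the application level instead, e.g. rsync over SSH at ~5000 KB/s
rsync --bwlimit=5000 -e ssh largefile user@example.com:/tmp/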
Individual User Limits
OpenSSH's sshd_config has no native per-user bandwidth directive, so per-user throttling is usually handled outside sshd (for example with traffic shaping, or per-command limits such as scp -l below). A Match block can still scope other per-user restrictions:
# In sshd_config
Match User username
    MaxSessions 2

Global Rate Limiting
Using iptables:
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m limit --limit 10/minute -j ACCEPT

Performance Optimization

Compression Settings
# In sshd_config
Compression delayed

Cipher Selection
# Faster ciphers first
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com

Keep Alive Settings
Client-side (~/.ssh/config):
Host *
    ServerAliveInterval 60
    ServerAliveCountMax 3

File Transfer Limits

SFTP Limits
In sshd_config:
Subsystem sftp /usr/lib/openssh/sftp-server -l INFO -f LOCAL6
Match Group sftpusers
    ChrootDirectory /sftp/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no

SCP Limits
# Limit SCP bandwidth to 1000 Kbit/s
scp -l 1000

Security Maximums
SSH security maximums encompass various settings designed to thwart malicious attacks.

Key Size Limits
RSA: 16384 bits (practical max)
ECDSA: 521 bits
Ed25519: 256 bits (fixed)

Authentication Timeout
# In sshd_config
AuthenticationMethods publickey,keyboard-interactive
MaxAuthTries 3
LoginGraceTime 60

Monitoring and Logging

Logging Levels
# In sshd_config
LogLevel VERBOSE
SyslogFacility AUTH

Connection Monitoring
# Active connections
who | grep pts
# SSH processes
ps aux | grep ssh
# Connection attempts
tail -f /var/log/auth.log

Troubleshooting

Check Current Limits
# System limits
sysctl -a | grep max
# SSH daemon limits
sshd -T | grep max
# Process limits
cat /proc/sys/fs/file-max

Common Issues and Solutions

Too Many Open Files
# Check current open files
lsof | grep sshd | wc -l
# Increase the system limit
echo "fs.file-max = 100000" >> /etc/sysctl.conf
sysctl -p

Connection Drops
# Add to sshd_config
TCPKeepAlive yes
ClientAliveInterval 60
ClientAliveCountMax 3

Best Practices

Regular Monitoring
Create a monitoring script:
#!/bin/bash
echo "Active SSH connections: $(netstat -tnpa | grep 'ESTABLISHED.*sshd' | wc -l)"
echo "Failed attempts: $(grep "Failed password" /var/log/auth.log | wc -l)"

Automated Cleanup
Be careful with scheduled cleanup jobs: pkill -o sshd targets the oldest sshd process, which is normally the listening daemon itself, so a cron entry like the one below can take down the SSH service rather than stale sessions. Prefer targeting specific idle user sessions instead.
# Add to crontab (use with caution, see note above)
0 * * * * pkill -o sshd

Remember to always back up configuration files before making changes and test in a non-production environment first.

Similar Articles from Unixmen

The post SSH Max Limits and Optimization appeared first on Unixmen.
-
How to Add Guests in VirtualBox
By: Joshua Njiru Wed, 15 Jan 2025 17:18:37 +0000

What are VirtualBox Guest Additions?
VirtualBox Guest Additions is a software package that enhances the functionality of virtual machines running in Oracle VM VirtualBox. It consists of device drivers and system applications that optimize the guest operating system for better performance and usability.

Benefits of Installing Guest Additions
Installing Guest Additions provides several key benefits:
Enhanced display integration: automatic screen resolution adjustment, support for higher display resolutions, seamless window integration.
Improved performance: hardware-accelerated graphics, mouse pointer integration, shared clipboard functionality.
Additional features: shared folders between host and guest, seamless windows mode, time synchronization, better audio support.

Prerequisites for Installation
Before installing Guest Additions, ensure you have:
VirtualBox installed and updated to the latest version
A running virtual machine
Administrative privileges in the guest OS
Sufficient disk space (approximately 200MB)
Development tools or build essentials (for Linux guests)

Installing Guest Additions on Windows
Start your Windows virtual machine.
From the VirtualBox menu, select "Devices" → "Insert Guest Additions CD image".
When AutoRun appears, click "Run VBoxWindowsAdditions.exe".
Follow the installation wizard: accept the default options and allow the installation of drivers when prompted.
Restart the virtual machine when finished.

Installing Guest Additions on Linux
Install the required packages:
# For Ubuntu/Debian
sudo apt-get update
sudo apt-get install build-essential dkms linux-headers-$(uname -r)
# For Fedora/RHEL
sudo dnf install gcc kernel-devel kernel-headers dkms make bzip2
Insert the Guest Additions CD: click "Devices" → "Insert Guest Additions CD image".
Mount and install:
sudo mount /dev/cdrom /mnt
cd /mnt
sudo ./VBoxLinuxAdditions.run
Restart the virtual machine.

Installing Guest Additions on macOS
Start your macOS virtual machine.
Select "Devices" → "Insert Guest Additions CD image".
Mount the Guest Additions ISO if it is not automatically mounted.
Double-click the VBoxDarwinAdditions.pkg.
Follow the installation wizard.
Restart the virtual machine.

Common Features and How to Use Them

Shared Folders
Power off the virtual machine. In VirtualBox Manager, select your VM, click "Settings" → "Shared Folders", and add a new shared folder.

Drag and Drop
In VM Settings, go to "General" → "Advanced" and set "Drag'n'Drop" to Bidirectional.

Clipboard Sharing
In VM Settings, go to "General" → "Advanced" and set "Shared Clipboard" to Bidirectional.

Seamless Mode
Press the Host Key (usually Right Ctrl) + L, or select "View" → "Seamless Mode".

Troubleshooting Installation Issues
What are some of the common problems and solutions?

Installation Fails
Verify system requirements.
Update VirtualBox to the latest version.
Install the required development tools.

Screen Resolution Issues
Restart the virtual machine.
Reinstall Guest Additions.
Check display adapter settings.

Shared Folders Not Working
Add the user to the vboxsf group (Linux):
sudo usermod -aG vboxsf $(whoami)
Verify mount points and permissions.

Building Kernel Modules Fails
Install the correct kernel headers.
Update the system.
Check system logs for specific errors.

Updating Guest Additions

Check Current Version
# On Linux
modinfo vboxguest | grep ^version
# On Windows: check Programs and Features

Update Process
Download the latest VirtualBox version.
Update Guest Additions through the "Devices" menu.
Reinstall following the same process as the initial installation.

Best Practices
Before installation: take a snapshot of your VM, back up important data, and update the guest OS.
After installation: test all required features, configure shared folders and clipboard as needed, and document any custom settings.
Maintenance: keep the Guest Additions version matched with VirtualBox, regularly update both VirtualBox and Guest Additions, and monitor system performance.

More Articles from Unixmen

The post How to Add Guests in VirtualBox appeared first on Unixmen.
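Shared folders can also be configured from the host's command line instead of the GUI. A minimal sketch using VBoxManage; the VM name, share name, and host path are placeholders:

# Attach a host directory to the VM as an automounted shared folder
VBoxManage sharedfolder add "MyVM" --name projects --hostpath /home/user/projects --automount
# Inside a Linux guest, automounted folders appear under /media/sf_<name>;
# the guest user must belong to the vboxsf group to access them.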
-
Web-Slinger.css: Across the Swiper-Verse
by: Lee Meyer Wed, 15 Jan 2025 15:03:25 +0000 My previous article warned that horizontal motion on Tinder has irreversible consequences. I’ll save venting on that topic for a different blog, but at first glance, swipe-based navigation seems like it could be a job for Web-Slinger.css, your friendly neighborhood experimental pure CSS Wow.js replacement for one-way scroll-triggered animations. I haven’t managed to fit that description into a theme song yet, but I’m working on it. In the meantime, can Web-Slinger.css swing a pure CSS Tinder-style swiping interaction to indicate liking or disliking an element? More importantly, will this experiment give me an excuse to use an image of Spider Pig, in response to popular demand in the bustling comments section of my previous article? Behold the Spider Pig swiper, which I propose as a replacement for captchas because every human with a pulse loves Spider Pig. With that unbiased statement in mind, swipe left or right below (only Chrome and Edge for now) to reveal a counter showing how many people share your stance on Spider Pig. CodePen Embed Fallback Broaden your horizons The crackpot who invented Web-Slinger.css seems not to have considered horizontal scrolling, but we can patch that maniac’s monstrous creation like so: [class^="scroll-trigger-"] { view-timeline-axis: x; } This overrides the default behavior for marker elements with class names using the Web-Slinger convention of scroll-trigger-n, which activates one-way, scroll-triggered animations. By setting the timeline axis to x, the scroll triggers only run when they are revealed by scrolling horizontally rather than vertically (which is the default). Otherwise, the triggers would run straightaway because although they are out of view due to the container’s width, they will all be above the fold vertically when we implement our swiper. My steps in laying the foundation for the above demo were to fork this awesome JavaScript demo of Tinder-style swiping by Nikolay Talanov, strip out the JavaScript and all the cards except for one, then import Web-Slinger.css and introduce the horizontal patch explained above. Next, I changed the card’s container to position: fixed, and introduced three scroll-snapping boxes side-by-side, each the height and width of the viewport. I set the middle slide to scroll-align: center so that the user starts in the middle of the page and has the option to scroll backwards or forwards. Sidenote: When unconventionally using scroll-driven animations like this, a good mindset is that the scrollable element needn’t be responsible for conventionally scrolling anything visible on the page. This approach is reminiscent of how the first thing you do when using checkbox hacks is hide the checkbox and make the label look like something else. We leverage the CSS-driven behaviors of a scrollable element, but we don’t need the default UI behavior. I put a div marked with scroll-trigger-1 on the third slide and used it to activate a rejection animation on the card like this: <div class="demo__card on-scroll-trigger-1 reject"> <!-- HTML for the card --> </div> <main> <div class="slide"> </div> <div id="middle" class="slide"> </div> <div class="slide"> <div class="scroll-trigger-1"></div> </div> </main> It worked the way I expected! I knew this would be easy! (Narrator: it isn’t, you’ll see why next.) 
<div class="on-scroll-trigger-2 accept"> <div class="demo__card on-scroll-trigger-2 reject"> <!-- HTML for the card --> </div> </div> <main> <div class="slide"> <div class="scroll-trigger-2"></div> </div> <div id="middle" class="slide"> </div> <div class="slide"> <div class="scroll-trigger-1"></div> </div> </main> After adding this, Spider Pig is automatically ”liked” when the page loads. That would be appropriate for a card that shows a person like myself who everybody automatically likes — after all, a middle-aged guy who spends his days and nights hacking CSS is quite a catch. By contrast, it is possible Spider Pig isn’t everyone’s cup of tea. So, let’s understand why the swipe right implementation would behave differently than the swipe left implementation when we thought we applied the same principles to both implementations. Take a step back This bug drove home to me what view-timeline does and doesn’t do. The lunatic creator of Web-Slinger.css relied on tech that wasn’t made for animations which run only when the user scrolls backwards. This visualizer shows that no matter what options you choose for animation-range, the subject wants to complete its animation after it has crossed the viewport in the scrolling direction — which is exactly what we do not want to happen in this particular case. Fortunately, our friendly neighborhood Bramus from the Chrome Developer Team has a cool demo showing how to detect scroll direction in CSS. Using the clever --scroll-direction CSS custom property Bramus made, we can ensure Spider Pig animates at the right time rather than on load. The trick is to control the appearance of .scroll-trigger-2 using a style query like this: :root { animation: adjust-slide-index 3s steps(3, end), adjust-pos 1s; animation-timeline: scroll(root x); } @property --slide-index { syntax: "<number>"; inherits: true; initial-value: 0; } @keyframes adjust-slide-index { to { --slide-index: 3; } } .scroll-trigger-2 { display: none; } @container style(--scroll-direction: -1) and style(--slide-index: 0) { .scroll-trigger-2 { display: block; } } That style query means that the marker with the .scroll-trigger-2 class will not be rendered until we are on the previous slide and reach it by scrolling backward. Notice that we also introduced another variable named --slide-index, which is controlled by a three-second scroll-driven animation with three steps. It counts the slide we are on, and it is used because we want the user to swipe decisively to activate the dislike animation. We don’t want just any slight breeze to trigger a dislike. When the swipe has been concluded, one more like (I’m superhuman) As mentioned at the outset, measuring how many CSS-Tricks readers dislike Spider Pig versus how many have a soul is important. To capture this crucial stat, I’m using a third-party counter image as a background for the card underneath the Spider Pig card. It is third-party, but hopefully, it will always work because the website looks like it has survived since the dawn of the internet. I shouldn’t complain because the price is right. 
I chose the least 1990s-looking counter and used it like this: @container style(--scroll-trigger-1: 1) { .result { background-image: url('https://counter6.optistats.ovh/private/freecounterstat.php?c=qbgw71kxx1stgsf5shmwrb2aflk5wecz'); background-repeat: no-repeat; background-attachment: fixed; background-position: center; } .counter-description::after { content: 'who like spider pig'; } .scroll-trigger-2 { display: none; } } @container style(--scroll-trigger-2: 1) { .result { background-image: url('https://counter6.optistats.ovh/private/freecounterstat.php?c=abtwsn99snah6wq42nhnsmbp6pxbrwtj'); background-repeat: no-repeat; background-attachment: fixed; background-position: center; } .counter-description::after { content: 'who dislike spider pig'; } .scroll-trigger-1 { display: none; } } Scrolls of wisdom: Lessons learned This hack turned out more complex than I expected, mostly because of the complexity of using scroll-triggered animations that only run when you meet an element by scrolling backward which goes against assumptions made by the current API. That’s a good thing to know and understand. Still, it’s amazing how much power is hidden in the current spec. We can style things based on extremely specific scrolling behaviors if we believe in ourselves. The current API had to be hacked to unlock that power, but I wish we could do something like: [class^="scroll-trigger-"] { view-timeline-axis: x; view-timeline-direction: backwards; /* <-- this is speculative. do not use! */ } With an API like that allowing the swipe-right scroll trigger to behave the way I originally imagined, the Spider Pig swiper would not require hacking. I dream of wider browser support for scroll-driven animations. But I hope to see the spec evolve to give us more flexibility to encourage designers to build nonlinear storytelling into the experiences they create. If not, once animation timelines land in more browsers, it might be time to make Web-Slinger.css more complete and production-ready, to make the more advanced scrolling use cases accessible to the average CSS user. Web-Slinger.css: Across the Swiper-Verse originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
-
LHB Linux Digest #25.01: Free Linux Course, ListMonk, File Descriptors and More
by: Abhishek Prakash Wed, 15 Jan 2025 18:28:50 +0530
This is the first newsletter of the year 2025. I hope expanding your Linux knowledge is one of your New Year's resolutions, too. I am looking to learn and use Ansible in my homelab setup. What's yours?
The focus of Linux Handbook in 2025 will be on self-hosting. You'll see more tutorials and articles on open source software you can self-host on your cloud server or your home lab. Of course, we'll continue to create new content on Kubernetes, Terraform, Ansible and other DevOps tools.
Here are the other highlights of this edition of LHB Linux Digest:
Extraterm terminal
File descriptors
Self-hosting a mailing list manager
And more tools, tips and memes for you
This edition of LHB Linux Digest newsletter is supported by PikaPods.
❇️ Self-hosting without hassle
PikaPods allows you to quickly deploy your favorite open source software. All future updates are handled automatically by PikaPods while you enjoy using the software. I use it to self-host Umami analytics. Oh! You get $5 free credit, so try it out and see if you could rely on PikaPods.
PikaPods - Instant Open Source App Hosting
Run the finest Open Source web apps from $1/month, fully managed, no tracking, no ads, full privacy. Self-hosting was never this convenient.
-
Don't Believe These Myths About Dual Booting Linux and Windows
by: Ankush Das
-
A Few Ways That Cloudways Makes Running This Site a Little Easier
by: Geoff Graham Tue, 14 Jan 2025 14:49:10 +0000
(This is a sponsored post.) It's probably no surprise to you that CSS-Tricks is (proudly) hosted on Cloudways. DigitalOcean bought us back in 2021 then turned right around and did the same with Cloudways shortly after. It was just a matter of time before we'd come together this way. And here we are!
We were previously hosted on Flywheel which was a fairly boutique WordPress hosting provider until WP Engine purchased it years back. And, to be very honest and up-front, Flywheel served us extremely well. There reached a point when it became pretty clear that CSS-Tricks was simply too big for Flywheel to scale along. That might've led us to try out WP Engine in the absence of Cloudways… but it's probably good that never came to fruition considering recent events.
Anyway, moving hosts always means at least a smidge of context-switching. Different server names with different configurations with different user accounts with different controls. We're a pretty low-maintenance operation around here, so being on a fully managed host is a benefit because I see very little of the day-to-day nuance that happens on our server. The Cloudways team took care of all the heavy lifting of migrating us and making sure we were set up with everything we needed, from SFTP accounts and database access to a staging environment and deployment points.
Our development flow used to go something like this:
Fire up Local (Flywheel's local development app)
Futz around with local development
Push to main
Let a CI/CD pipeline publish the changes
I know, ridiculously simple. But it was also riddled with errors because we didn't always want to publish changes on push. There was a real human margin of error in there, especially when handling WordPress updates. We could have (and should have) had some sort of staging environment rather than blindly trusting what was working locally. But again, we're kinduva ragtag team despite the big corporate backing.
The flow now looks like this:
Fire up Local (we still use it!)
Futz around with local development
Push to main
Publish to staging
Publish to production
This is something we could have set up in Flywheel but was trivial with Cloudways. I gave up some automation for quality assurance's sake. Switching environments in Cloudways is a single click and I like a little manual friction to feel like I have some control in the process. That might not scale well for large teams on an enterprise project, but that's not really what Cloudways is all about — that's why we have DigitalOcean!
See that baseline-status-widget branch in the dropdown? That's a little feature I'm playing with (and will post about later). I like that GitHub is integrated directly into the Cloudways UI so I can experiment with it in whatever environment I want, even before merging it with either the staging or master branches. It makes testing a whole lot easier and way less error-prone than triggering auto-deployments in every which way.
Here's another nicety: I get a good snapshot of the differences between my environments through Cloudways monitoring. For example, I was attempting to update our copy of the Gravity Forms plugin just this morning. It worked locally but triggered a fatal in staging. I went in and tried to sniff out what was up with the staging environment, so I headed to the Vulnerability Scanner and saw that staging was running an older version of WordPress compared to what was running locally and in production.
(We don't version control WordPress core, so that was an easy miss.) I hypothesized that the newer version of Gravity Forms had a conflict with the older version of WordPress, and this made it ridiculously easy to test my assertion. Turns out that was correct and I was confident that pushing to production was safe and sound — which it was.
That little incident inspired me to share a little about what I've liked about Cloudways so far. You'll notice that we don't push our products too hard around here. Anytime you experience something delightful — whatever it is — is a good time to blog about it and this was clearly one of those times.
I'd be remiss if I didn't mention that Cloudways is ideal for any size or type of WordPress site. It's one of the few hosts that will let you BYO cloud, so to speak, where you can hold your work on a cloud server (like a DigitalOcean droplet, for instance) and let Cloudways manage the hosting, giving you all the freedom to scale when needed on top of the benefits of having a managed host. So, if you need a fully managed, autoscaling hosting solution for WordPress like we do here at CSS-Tricks, Cloudways has you covered.
A Few Ways That Cloudways Makes Running This Site a Little Easier originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
-
I Feel Like a Hacker Using These Cool Linux Terminal Tools
by: Sreenath
-
How to Wait for the sibling-count() and sibling-index() Functions
by: Juan Diego Rodríguez Mon, 13 Jan 2025 15:08:01 +0000
New features don't just pop up in CSS (but I wish they did). Rather, they go through an extensive process of discussions and considerations, defining, writing, prototyping, testing, shipping, handling support, and many more verbs that I can't even begin to imagine. That process is long, and despite how much I want to get my hands on a new feature, as an everyday developer, I can only wait.
I can, however, control how I wait: do I avoid all possible interfaces or demos that are possible with that one feature? Or do I push the boundaries of CSS and try to do them anyway? As ambitious and curious developers, many of us choose the latter option. CSS would grow stagnant without that mentality. That's why, today, I want to look at two upcoming functions: sibling-count() and sibling-index(). We're waiting for them — and have been for several years — so I'm letting my natural curiosity get the best of me so I can get a feel for what to be excited about. Join me!

The tree-counting functions
At some point, you've probably wanted to know the position of an element amongst its siblings or how many children an element has to calculate something in CSS, maybe for some staggering animation in which each element has a longer delay, or perhaps for changing an element's background-color depending on its number of siblings. This has been a long-awaited deal on my CSS wishlists. Take this CSSWG GitHub Issue from 2017.
However, counters work using strings, rendering them useless inside a calc() function that deals with numbers. We need a set of similar functions that return as integers the index of an element and the count of siblings. This doesn't seem too much to ask. We can currently query an element by its tree position using the :nth-child() pseudo-selector (and its variants), not to mention query an element based on how many items it has using the :has() pseudo-selector.
Luckily, this year the CSSWG approved implementing the sibling-count() and sibling-index() functions! And we already have something in the spec written down.
How much time do we have to wait to use them? Earlier this year Adam Argyle said that "a Chromium engineer mentioned wanting to do it, but we don't have a flag to try it out with yet. I'll share when we do!" So, while I am hopeful to get more news in 2025, we probably won't see them shipped soon. In the meantime, let's get to what we can do right now!

Rubbing two sticks together
The closest we can get to tree counting functions in terms of syntax and usage is with custom properties. However, the biggest problem is populating them with the correct index and count. The simplest and longest method is hardcoding each using only CSS: we can use the nth-child() selector to give each element its corresponding index:
li:nth-child(1) { --sibling-index: 1; }
li:nth-child(2) { --sibling-index: 2; }
li:nth-child(3) { --sibling-index: 3; }
/* and so on... */
Setting the sibling-count() equivalent has a bit more nuance since we will need to use quantity queries with the :has() selector. A quantity query has the following syntax:
.container:has(> :last-child:nth-child(m)) { }
…where m is the number of elements we want to target. It works by checking if the last element of a container is also the nth element we are targeting; thus it has only that number of elements. You can create your custom quantity queries using this tool by Temani Afif.
In this case, our quantity queries would look like the following:
ol:has(> :last-child:nth-child(1)) { --sibling-count: 1; }
ol:has(> :last-child:nth-child(2)) { --sibling-count: 2; }
ol:has(> :last-child:nth-child(3)) { --sibling-count: 3; }
/* and so on... */
This example is intentionally light on the number of elements for brevity, but as the list grows it will become unmanageable. Maybe we could use a preprocessor like Sass to write them for us, but we want to focus on a vanilla CSS solution here. For example, the following demo can support up to 12 elements, and you can already see how ugly it gets in the code. That’s 24 rules to know the index and count of 12 elements, for those of you keeping score. It surely feels like we could get that number down to something more manageable, but if we hardcode each index we are bound to increase the amount of code we write. The best we can do is rewrite our CSS so we can nest the --sibling-index and --sibling-count properties together. Instead of writing each property by itself:
li:nth-child(2) { --sibling-index: 2; }
ol:has(> :last-child:nth-child(2)) { --sibling-count: 2; }
We could instead nest the --sibling-count rule inside the --sibling-index rule:
li:nth-child(2) { --sibling-index: 2; ol:has(> &:last-child) { --sibling-count: 2; } }
While it may seem wacky to nest a parent inside its children, the following CSS code is completely valid; we are selecting the second li element, and inside it, we are selecting an ol element if its second li element is also the last, meaning the list has only two elements. Which syntax is easier to manage? It’s up to you. But that’s just a slight improvement. If we had, say, 100 elements we would still need to hardcode the --sibling-index and --sibling-count properties 100 times. Luckily, the following method grows the number of rules far more slowly: roughly with the square root of the number of elements rather than one-to-one. So instead of writing 100 rules for 100 elements, we will be writing fewer than 20 rules for around 100 elements.
Flint and steel
This method was first described by Roman Komarov in October last year, in a post where he prototypes both tree-counting functions and the future random() function. It’s an amazing post, so I strongly encourage you to read it. This method also uses custom properties, but instead of hardcoding each one, we will be using two custom properties that build up the --sibling-index property for each element. Just to be consistent with Roman’s post, we will call them --si1 and --si2, both starting at 0:
li { --si1: 0; --si2: 0; }
The real --sibling-index will be constructed using both properties and a factor (F), an integer greater than or equal to 2 that tells us how many elements we can select according to the formula F² - 1. So…
For a factor of 2, we can select 3 elements.
For a factor of 3, we can select 8 elements.
For a factor of 5, we can select 24 elements.
For a factor of 10, we can select 99 elements.
For a factor of 25, we can select 624 elements.
As you can see, increasing the factor gives us quadratic gains in how many elements we can select. But how does all this translate to CSS? The first thing to know is that the formula for calculating the --sibling-index property is calc(F * var(--si2) + var(--si1)). If we take a factor of 3, it would look like the following:
li {
  --si1: 0;
  --si2: 0;
  /* factor of 3; it's a hardcoded number */
  --sibling-index: calc(3 * var(--si2) + var(--si1));
}
The following selectors may seem arbitrary, but stay with me here.
For the --si1 property, we will write rules that select elements at each offset from a multiple of the factor (offsets 1, 2, and so on, up to F - 1) and set --si1 to that offset. This translates to the following CSS:
li:nth-child(Fn + 1) { --si1: 1; }
li:nth-child(Fn + 2) { --si1: 2; }
/* ... */
li:nth-child(Fn + (F-1)) { --si1: (F-1); }
So if our factor is 3, we will write rules until we reach F - 1, so 2 rules:
li:nth-child(3n + 1) { --si1: 1; }
li:nth-child(3n + 2) { --si1: 2; }
For the --si2 property, we will write rules selecting elements in batches of the factor (so if our factor is 3, we will select 3 elements per rule), going from the last possible index (in this case 8) backward until we simply are unable to select more elements in batches. This is a little more convoluted to write in CSS:
li:nth-child(n + F*1):nth-child(-n + F*2 - 1) { --si2: 1; }
li:nth-child(n + F*2):nth-child(-n + F*3 - 1) { --si2: 2; }
/* ... */
li:nth-child(n + F*(F-1)):nth-child(-n + F*F - 1) { --si2: (F-1); }
Again, if our factor is 3, we will write the following two rules:
li:nth-child(n + 3):nth-child(-n + 5) { --si2: 1; }
li:nth-child(n + 6):nth-child(-n + 8) { --si2: 2; }
And that’s it! By only setting those two values for --si1 and --si2 we can count up to 8 total elements. The math behind how it works seems wacky at first, but once you visually get it, it all clicks. I made this interactive demo in which you can see how all elements can be reached using this formula. Hover over the code snippets to see which elements can be selected, and click on each snippet to combine them into a possible index. If you crank the elements and factor to the max, you can see that we can select 49 elements using only 14 snippets! Wait, one thing is missing: the sibling-count() function. Luckily, we will be reusing all we have learned from prototyping --sibling-index. We will start with two custom properties, --sc1 and --sc2, at the container, both starting at 0 as well. The formula for calculating --sibling-count is the same:
ol {
  --sc1: 0;
  --sc2: 0;
  /* factor of 3; also a hardcoded number */
  --sibling-count: calc(3 * var(--sc2) + var(--sc1));
}
Roman’s post also explains how to write selectors for the --sibling-count property by themselves, but we will use the :has() selection method from our first technique so we don’t have to write extra selectors. We can cram those --sc1 and --sc2 properties into the rules where we defined the --sibling-index properties:
/* --si1 and --sc1 */
li:nth-child(3n + 1) { --si1: 1; ol:has(> &:last-child) { --sc1: 1; } }
li:nth-child(3n + 2) { --si1: 2; ol:has(> &:last-child) { --sc1: 2; } }
/* --si2 and --sc2 */
li:nth-child(n + 3):nth-child(-n + 5) { --si2: 1; ol:has(> &:last-child) { --sc2: 1; } }
li:nth-child(n + 6):nth-child(-n + 8) { --si2: 2; ol:has(> &:last-child) { --sc2: 2; } }
This is using a factor of 3, so we can count up to eight elements with only four rules. The following example has a factor of 7, so we can count up to 48 elements with only 14 rules. This method is great, but may not be the best fit for everyone due to the almost magical way it works, or simply because you don’t find it aesthetically pleasing. While for avid hands lighting a fire with flint and steel is a breeze, many won’t get their fire started.
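Whichever technique you use to populate them, consuming the custom properties is the straightforward part. As a rough illustration, echoing the motivations from the start of the article (the staggered delay and fading are hypothetical uses of mine, not part of either technique), the properties could drive something like this:
li {
  /* each item waits a little longer than the one before it; assumes --sibling-index was populated by one of the methods above */
  transition-delay: calc(var(--sibling-index, 0) * 100ms);
  /* fade items toward the end of the list; assumes --sibling-count was populated as well */
  opacity: calc(0.4 + 0.6 * var(--sibling-index, 1) / var(--sibling-count, 1));
}
The fallback values inside var() just keep the declarations valid before the counting rules kick in.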
Using a flamethrower
For this method, we will once again use custom properties to mimic the tree-counting functions, and best of all, we will write less than 20 lines of code to count up to infinity—or I guess to 1.7976931348623157e+308, which is the double-precision floating point limit! We will be using the Mutation Observer API, so of course it takes JavaScript. I know that’s like admitting defeat for many, but I disagree. If the JavaScript method is simpler (which it is, by far, in this case), then it’s the most appropriate choice. Just as a side note, if performance is your main worry, stick to hard-coding each index in CSS or HTML. First, we will grab our container from the DOM:
const elements = document.querySelector("ol");
Then we’ll create a function that sets the --sibling-index property on each element and the --sibling-count on the container (it will be available to its children due to the cascade). For the --sibling-index, we have to loop through elements.children, and we can get the --sibling-count from elements.children.length.
const updateCustomProperties = () => {
  let index = 1;
  for (const element of elements.children) {
    element.style.setProperty("--sibling-index", index);
    index++;
  }
  elements.style.setProperty("--sibling-count", elements.children.length);
};
Once we have our function, remember to call it once so we have our initial tree-counting properties:
updateCustomProperties();
Lastly, the Mutation Observer. We need to initiate a new observer using the MutationObserver constructor. It takes a callback that gets invoked each time the elements change, so we pass it our updateCustomProperties function. With the resulting observer object, we can call its observe() method, which takes two parameters: the element we want to observe, and a config object that defines what we want to observe through three boolean properties: attributes, childList, and subtree. In this case, we just want to check for changes in the child list, so we set that one to true:
const observer = new MutationObserver(updateCustomProperties);
const config = { attributes: false, childList: true, subtree: false };
observer.observe(elements, config);
That would be all we need! Using this method we can count many elements; in the following demo I set the max to 100, but it can easily handle ten times that. So yeah, that’s our flamethrower right there. It definitely gets the fire started, but it’s plenty overkill for the vast majority of use cases. But that’s what we have while we wait for the perfect lighter.
More information and tutorials
Possible Future CSS: Tree-Counting Functions and Random Values (Roman Komarov)
View Transitions Staggering (Chris Coyier)
Element Indexes (Chris Coyier)
Related Issues
Enable the use of counter() inside calc() #1026
Proposal: add sibling-count() and sibling-index() #4559
Extend sibling-index() and sibling-count() with a selector argument #9572
Proposal: children-count() function #11068
Proposal: descendant-count() function #11069
How to Wait for the sibling-count() and sibling-index() Functions originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
-
RepublicLabs
by: aiparabellum.com Mon, 13 Jan 2025 05:36:03 +0000 https://republiclabs.ai/gen-ai-tools RepublicLabs.ai is a cutting-edge platform designed to revolutionize the way we create visual content. By leveraging advanced AI generative models, this tool allows users to create stunning images and videos effortlessly. Whether you’re looking to generate professional headshots, artistic visuals, or even fantasy animations, RepublicLabs.ai offers a wide range of tools to cater to your creative needs. It empowers individuals, professionals, and businesses to bring their ideas to life without requiring complex technical skills. Features of RepublicLabs.ai RepublicLabs.ai boasts an extensive suite of features tailored to meet diverse creative demands. Here are some of its standout features: AI Face Generator: Create realistic human faces with ease. AI Art Generator: Craft artistic visuals and paintings with AI assistance. Cartoon AI Generator: Turn photos into cartoon-style images. Fantasy AI: Design imaginative and surreal visuals effortlessly. Unrestricted AI Image Generator: Generate any image without limitations. Professional Headshot Generator: Create high-quality headshots for professional use. AI LinkedIn Photo Generator: Perfect LinkedIn profile pictures created by AI. Ecommerce Photography Tool: Generate product images optimized for online stores. Pyramid Flow Video Generator: Produce visually appealing videos with AI technology. Anime Image Generator: Create anime-style images with ease. Text-to-Art Generator: Transform text into stunning artwork. AI Product Advertisement Generator: Create compelling product ads for business needs. Deep AI Image Generator: Produce high-quality, AI-driven images. Minimax AI Video Generator: Generate videos with minimal effort. Uncensored and Unfiltered AI Generators: Produce unrestricted creative content. How RepublicLabs.ai Works Creating images and videos with RepublicLabs.ai is simple and user-friendly. Here’s how it works: Choose a Tool: Select the desired AI tool from the variety of options available on the platform. Input Your Ideas: Provide a prompt, text, or upload an image to guide the AI in generating content. Customize Outputs: Adjust styles, colors, and other parameters to personalize the results. Preview and Download: Review the generated content and download it for use. The platform is designed with a seamless workflow, ensuring efficiency and quality in every output. Benefits of RepublicLabs.ai RepublicLabs.ai offers numerous advantages that make it an invaluable tool for creators: Ease of Use: No technical expertise required; the platform is beginner-friendly. Versatility: Supports a wide range of creative needs, from professional to personal projects. Time-Saving: Generates high-quality visuals and videos in just a few minutes. Cost-Effective: Eliminates the need for expensive photography or design services. Unrestricted Creativity: Enables users to explore limitless possibilities without boundaries. Professional Results: Produces content that meets high-quality standards suitable for business use. Pricing of RepublicLabs.ai RepublicLabs.ai offers flexible pricing plans to cater to various user needs. Users can explore free tools like the AI Headshot Generator and other trial options. For advanced features and unrestricted access, premium plans are available. Pricing details can be found on the platform to suit individual and organizational budgets. 
Review of RepublicLabs.ai RepublicLabs.ai has garnered positive reviews from users across different industries. Creators appreciate its user-friendly interface, diverse features, and the quality of its outputs. Professionals have highlighted its efficiency in generating marketing materials, while artists commend its ability to bring imaginative concepts to life. The platform is widely regarded as a game-changer in the field of AI-driven content creation. Conclusion RepublicLabs.ai is a versatile and powerful platform that bridges the gap between creativity and technology. With its vast array of AI tools, it empowers users to transform their ideas into captivating images and videos effortlessly. Whether you’re an artist, a marketer, or someone looking to enhance their personal portfolio, RepublicLabs.ai provides the tools you need to succeed. Explore the endless possibilities and let your creativity shine with this innovative AI-powered platform. Visit Website The post RepublicLabs appeared first on AI Parabellum.
-
ArmSoM AIM7: A Promising Upcoming Rockchip Device for AI Development
by: Abhishek Kumar When ArmSoM kindly offered to send me their upcoming RK3588 AI Module 7 (AIM7), along with the AIM-IO carrier board, I was thrilled. Having worked with AI hardware like Nvidia’s Jetson Nano and Raspberry Pi boards, I’m always curious about devices that promise powerful AI capabilities without requiring a large physical setup or heavy power draw. The RK3588 AI Module 7 (AIM7), powered by the Rockchip RK3588, seemed to hit that sweet spot: a compact module with robust processing power, efficient energy use, and versatile connectivity options for a range of projects. What intrigued me most was its potential to handle AI tasks like object detection and image processing while also supporting multimedia applications, all while being small enough to integrate into custom enclosures or embedded systems where space is at a premium. Here’s my hands-on experience with this exciting piece of hardware. 📋 The RK3588 AI Module 7 is an upcoming product in the crowdfunding pre-launch phase. My experience is with an early-stage product, and it will improve with the feedback provided by me and other reviewers.
ArmSoM RK3588 AI Module 7 (AIM7) specifications
The RK3588 AI Module 7 is a compact yet powerful board built around the Rockchip RK3588 SoC, an octa-core processor with a quad-core Cortex-A76 and a quad-core Cortex-A55, clocked up to 2.4 GHz. Complementing this powerhouse is the ARM Mali-G610 MP4 GPU with a 6 TOPS NPU, making it an excellent choice for AI workloads and multimedia applications. Its small size and versatile connectivity options make it suitable for embedded applications and development projects. The unit I received came with 8 GB of LPDDR4x RAM and 32 GB of eMMC storage.
CPU cores: Quad-core ARM Cortex-A76 + quad-core ARM Cortex-A55
GPU cores: ARM Mali-G610 MP4
Memory: 8 GB/32 GB LPDDR4x, 2112 MHz
Storage: microSD card, 32 GB eMMC 5.1 flash storage
Video encoding: 8K@30 fps H.265/H.264
Video decoding: 8K@60 fps H.265/VP9/AVS2, 8K@30 H.264 AVC/MVC
USB ports: 1x USB 3.0, 3x USB 2.0
Ethernet: 1x 10/100/1000 BASE-T
CSI interfaces: 12 channels (4x2) MIPI CSI-2 D-PHY1.1 (18 Gbps)
I/O: 3 UARTs, 2 SPIs, 2 I2S, 4 I2Cs, multiple GPIOs
PCIe: 1x 1/2/4-lane PCIe 3.0 & 1x 1-lane PCIe 2.0
HDMI output: 1x HDMI 2.1 / 1x eDP 1.4
DP interface: 1x DP 1.4a
eDP/DP interface: 1x eDP 1.4 / 1x HDMI 2.1 out
DSI interface: 1x DSI (1x2) 2 sync
OS support: Debian, Ubuntu, Armbian
AIM-IO carrier board specifications
The AIM-IO carrier board is designed to complement the RK3588 AI Module 7. It offers a rich set of features, including multiple USB ports, display outputs, and expansion options, making it an ideal platform for development and prototyping.
USB ports: 4x USB 3.0 Type-A
Display: 1x DisplayPort, 1x HDMI-out
Networking: Gigabit Ethernet
GPIO: 40-pin expansion header
Power connectors: DC barrel jack for 5V input, PoE support
Expansion: M.2 (E-key, PCIe/USB/SDIO/UART), microSD
MIPI DSI: 1x 4-lane MIPI DSI up to 4K@60 fps
MIPI CSI0/1: 2x 2-lane MIPI CSI, max 2.5 Gbps per lane
MIPI CSI2/3: 1x 4-lane MIPI CSI, max 2.5 Gbps per lane
Firmware: Flashing and device mode via USB Type-C
Dimensions: 100 x 80 x 29 mm
Unboxing and first impressions
The RK3588 AI Module 7 arrived in a compact, well-packaged generic box alongside the AIM-IO board, which is essential for getting the module up and running. At first glance, the AIM7 itself is tiny, measuring just 69.6 x 45 mm—almost identical in size to the Jetson Nano’s core module.
I added the heatsink on my own. The carrier board, too, shares the same dimensions as the Jetson Nano Developer Kit’s carrier board, making it an easy swap for those already familiar with Nvidia’s ecosystem. The build quality of both the module and the carrier board is solid. The AIM-IO board’s layout is clean, with clearly labeled ports and connectors. It features four USB 3.0 ports, HDMI and DisplayPort outputs, a 40-pin GPIO header, and an M.2 slot for expansion, a welcome addition for developers looking to push the hardware’s limits.
Setting it up
Installing the RK3588 AI Module 7 onto the AIM-IO board was straightforward. The edge connector design, similar to the Jetson Nano’s, meant it slotted in effortlessly. Powering it up required a standard 5V barrel jack. I know these Rockchip SBCs get really hot, so I got a generic passive heatsink. Active cooling options were way too expensive. Since I was hoping to use this device for home automation projects, I also got myself a DIY-built case. Don’t judge me, I’m moving out, so I haven’t even peeled the protective plastic off of the acrylic yet (to protect it from scratches)!
OS installation
📋 ArmSoM devices come with Debian preinstalled on the eMMC, but in Chinese. I decided to install a distro of my choice by replacing the default OS. Now, let’s talk about the OS installation. Spoiled by the ease of the Raspberry Pi Imager, I found myself on a steep learning curve while working with RKDevTool. Burning an image for the Rockchip device required me to watch several videos and read multiple pieces of documentation. After much trial and error, I managed to flash the provided Ubuntu image successfully. I’ve written a dedicated guide to help you install an OS on Rockchip devices using RKDevTool. One hiccup worth mentioning: I couldn’t test the SD card support as it didn’t work for me at all. This was disappointing, but the onboard eMMC storage provided a reliable fallback.
Performance testing
To gauge the RK3588 AI Module 7’s capabilities, I ran a series of benchmarks and real-world tests. Here’s how it fared. 📋 For general testing, I opted for the Armbian image, which worked well, though I couldn’t test the AI capabilities of the NPU on it. To explore those, I later switched to the Ubuntu image.
Geekbench Scores
Here you can see the single-core and multi-core performance of the RK3588, which is quite impressive. I mean, the results speak for themselves. The Cortex-A76 cores are a significant upgrade. You can see the full single-core performance of the RK3588 below, followed by the multi-core performance. The RK3588’s multi-core performance blew the Raspberry Pi and even the Jetson Nano out of the water, with scores nearly double in most tests. Source: ArmSoM
AI Workloads
The RK3588 AI Module 7’s 6 TOPS NPU is designed to handle AI inference efficiently. It supports RKNN-LLM, a toolkit that enables deploying lightweight language models on Rockchip hardware. I tested the TinyLLAMA model with 1.1 billion parameters, and the performance was amazing, achieving 16 tokens per second. Output result:
root@armsom-aim7-io:/# ./llm_demo tinyLlama.rkllm
rkllm init start
rkllm-runtime version: 1.0.1, rknpu driver version: 0.9.6, platform: RK3588
rkllm init success
**********************可输入以下问题对应序号获取回答/或自定义输入********************
[0] what is a hypervisor?
*************************************************************************
user: 0 what is a hypervisor?
robot: A hypervisor is software, firmware, or hardware that creates and runs virtual machines (VMs). There are two types: Type 1 (bare-metal, runs directly on hardware) and Type 2 (hosted, runs on top of an OS).
tokens 50 time 3.12 Token/s : 16.01
While I couldn’t test all the other supported models, here’s a list of models and their performance, courtesy of Radxa:
TinyLLAMA 1.1B – 15.03 tokens/s
Qwen 1.8B – 14.18 tokens/s
Phi3 3.8B – 6.46 tokens/s
ChatGLM3 – 3.67 tokens/s
The RKNN-LLM toolkit supports deploying lightweight language models on Rockchip hardware, and the NPU’s efficiency makes it a compelling option for AI workloads. The performance varies depending on the model size and parameters, with larger models naturally running slower. The NPU also consumes less power than the GPU, freeing the GPU up for other tasks.
Image & video processing
I couldn’t process live video and images as I didn’t have a compatible camera module. I own an RPi camera module but lacked the compatible ribbon cable to connect it to the AIM-IO board. Despite this, I tested the image processing capabilities using the YOLOv8 model for object detection on the demo images provided with it. It took me a lot of time to understand how to use it (I’ll cover that in a separate article, hopefully), but I got there thanks to Radxa's well-structured documentation, which provides a step-by-step guide. The results were impressive, showcasing the board’s ability to handle complex image recognition tasks efficiently.
What Could It Be Used For?
The RK3588 AI Module 7 (AIM7) offers a wide range of potential applications, making it a versatile tool for developers and hobbyists alike. Here are some possible use cases:
Home Automation: AIM7’s low power consumption and robust processing capabilities make it ideal for smart home setups. From controlling IoT devices to running edge AI for home security systems, the AIM7 can handle it all.
AI-Powered Applications: With its 6 TOPS NPU, the AIM7 excels in tasks like object detection, natural language processing, and image recognition. It’s a great choice for deploying lightweight AI models at the edge.
Media Centers: The ability to decode and encode 8K video makes it a powerful option for creating custom media centers or streaming setups.
Robotics: AIM7’s compact size and versatile connectivity options make it suitable for robotics projects that require real-time processing and AI inference.
Educational Projects: For students and educators, the AIM7 provides a hands-on platform to learn about embedded systems, AI, and computer vision.
Industrial Automation: Its robust hardware and software support make it a reliable choice for industrial applications like predictive maintenance and process automation.
DIY Projects: Whether you’re building a smart mirror, an AI-powered camera, or a custom NAS, the AIM7 offers the flexibility and power to bring your ideas to life.
If you are not interested in any of the above, you can always use it as a secondary desktop; after all, it is essentially a single-board computer. 😉
Final thoughts
After spending some time with the RK3588 AI Module 7, I can confidently say that it’s an impressive piece of hardware. I installed Ubuntu on it, and the desktop experience was surprisingly smooth. The onboard eMMC storage contributed a lot to that: app launches were fast and responsive, a noticeable speed boost compared to traditional SD card setups. Watching YouTube at 1080p was smooth, something that’s still a bit of a challenge for the Raspberry Pi at the same resolution.
The playback was consistent, without any stuttering, which is a big win for media-heavy applications. The RKNN-LLM toolkit enabled me to deploy lightweight models, and the NPU’s power efficiency freed up the GPU for other tasks, which is perfect for edge AI applications. My only gripe is the lack of extensive documentation from ArmSoM. While it’s available, it often doesn’t cover everything, and I found myself relying on Radxa and Mixtile forums to work around issues. ArmSoM told me that the documentation will be improved after the crowdfunding launch. You can follow the crowdfunding campaign and other developments on the dedicated page: RK3588 AI Module 7, a low-power AI module compatible with the Nvidia Jetson Nano ecosystem, on Crowd Supply. I’m looking forward to exploring more of its potential in my home automation projects, especially as I integrate AI for smarter, more efficient systems.
-
Remembering Usenet - The OG Social Network that Existed Even Before the World Wide Web
by: Bill Dyer Before Reddit, before GitHub, and even before the World Wide Web went online, there was Usenet. This decentralized network of discussion groups was a main line of communication of the early internet - ideas were exchanged, debates raged, research was conducted, and friendships formed. For those of us who experienced it, Usenet was more than just a communication tool; it was a community, a center of innovation, and a proving ground for ideas. 📋 It is also culturally and historically significant in that it popularized concepts and terms such as "LOL" (first used in a newsgroup in 1990), "FAQ," "flame," "spam," and "sockpuppet."
Oldest network that is still in use
Usenet is one of the oldest computer network communications systems still in widespread use. It went online in 1980, at the University of North Carolina at Chapel Hill and Duke University. 1980 is significant in that it predates the World Wide Web by over a decade. In fact, the "Internet" was basically a network of privately owned ARPANet sites. Usenet was created to be the network for the general public - before the general public even had access to the Internet. I see Usenet as the first social network. Over time, Usenet grew to include thousands of discussion groups (called newsgroups) and millions of users. Users read and write posts, called articles, using software called a newsreader. In the 1990s, early Web browsers and email programs often had a built-in newsreader. Topics were many; if you could imagine a topic, there was probably a group made for it and, if a group didn't exist, one could be made.
The culture of Usenet: Learning the ropes
While I say that Usenet was the first social network, it was never really organized to be one. Each group owner could - and usually did - set their own rules. Before participating in discussions, it was common advice to “lurk” for a while - read the group without posting - to learn the rules, norms, and tone of the community. Every Usenet group had its own etiquette, usually unwritten, and failing to follow it could lead to a “flaming.” These public scoldings were often harsh, but they reinforced the importance of respecting the group’s culture. For groups like comp.std.doc and comp.text, lurking was essential to understand the technical depth and specificity of the conversations. Jumping in without preparation wasn’t just risky - it was almost a rite of passage to survive the initial corrections from seasoned members. Yet, once you earned their respect, you became part of a tightly knit network of expertise and camaraderie - you belonged.
Usenet and the birth of Linux
One newsgroup, comp.os.minix, became legendary when Linus Torvalds posted about a new project he was working on. In August 1991, Linus announced the creation of Linux, a hobby project of a free operating system. Usenet's structure - decentralized, threaded, and open - can be seen as the first demonstration of the values of open-source development. Anyone with a connection and a bit of technical know-how could hop on and join in a conversation. For developers, Usenet quickly became the main tool for keeping up with rapidly evolving programming languages, paradigms, and methodologies. It's not a stretch to see how Usenet also became an essential platform for code collaboration, bug tracking, and intellectual exchange - it thrived in this ecosystem. The discussions were sometimes messy - flame wars and off-topic posts were common - but they were also authentic.
Problems were solved not in isolation but through collective effort. Without Usenet, the early growth of Linux may well have been much slower.
A personal memory: Helping across continents
My own experience with Usenet wasn’t just about reading discussions or solving technical problems. It became a bridge to collaboration and friendship. I remember one particular interaction well: a Finnish academic working on her doctoral dissertation on documentation standards posted a query to a group I frequented. By chance, I had the information she needed. At the time, I spent a lot of my time in groups like comp.std.doc and comp.text, where discussions about documentation standards and text processing were common. She was working with SGML standards, while I was more focused on HTML. Despite our different areas of expertise, Usenet made it easy for the two of us to connect and share knowledge. She later wrote back to say my input had helped her complete her dissertation. This took place in the mid-1990s, and that brief collaboration turned into a friendship that lasts to this day. Although we may go long periods without writing, we’ve always kept in touch. It’s evidence of how Usenet didn’t just encourage innovation but also created lasting friendships across continents.
The decline and legacy of Usenet
As the internet evolved, Usenet's use faded. The rise of web-based forums, social media, and version-control platforms like GitHub made Usenet feel clunky and outdated, and there are concerns that it is largely being used to send spam and conduct flame wars and binary (no text) exchanges. On February 22, 2024, Google stopped Usenet support for these reasons. Users can no longer post, subscribe, or view new content. Historical content from before the cut-off date can still be viewed, however. This doesn't mean that Usenet is dead; far from it. Usenet providers such as Giganews and Newsdemon are still running, if you are interested in looking into this. They require a subscription, but Eternal September provides free access. If Usenet's use has been declining, then why look into it? Its archives. The archives hold detailed discussions, insights, questions and answers, and snippets of code - a good deal of which is still relevant to today’s software hurdles.
Conclusion
I would guess that, for those of us who were there, Usenet remains a nostalgic memory. It does for me. The quirks of its culture - from FAQs to "RTFM" responses - were part of its charm. It was chaotic, imperfect, and sometimes frustrating, but it was also a place where real work got done and real connections were made. Looking back, my time on Usenet was one of the foundational chapters in my journey through technology. Helping a stranger across the globe complete a dissertation might seem like a small thing, but it’s emblematic of what Usenet stood for: collaboration without boundaries. It bears repeating: it was a place where knowledge was freely shared and where the seeds of ideas could grow into something great. And in this case, it helped create a friendship that continues to remind me of Usenet’s unique power to connect people. As Linux fans, we owe a lot to Usenet. Without it, Linux might have remained a small hobby project instead of becoming the force in computing that it has become. So, the next time you’re diving into a Linux distro or collaborating on an open-source project, take a moment to appreciate the platform that helped make it all possible.
-
QR Codes and Linux: Bridging Open-Source Technology with Seamless Connectivity
by: Janus Atienza Thu, 09 Jan 2025 17:34:55 +0000 QR codes have revolutionized how we share information, offering a fast and efficient way to connect physical and digital worlds. In the Linux ecosystem, the adaptability of QR codes aligns seamlessly with the open-source philosophy, enabling developers, administrators, and users to integrate QR code functionality into various workflows. Leveraging a free QR code generator can simplify this process, making it accessible even for those new to the technology. From system administration to enhancing user interfaces, using QR codes in Linux environments is both practical and innovative.
QR Codes on Linux: Where and How They Are Used
QR codes serve diverse purposes in Linux systems, providing solutions that enhance functionality and user experience. For instance, Linux administrators can generate QR codes that link to system logs or troubleshooting guides, offering easy access during remote sessions. In secure file sharing, QR codes can embed links to files, enabling safe resource sharing without exposing the system to vulnerabilities. Additionally, Linux’s prevalence in IoT device management is complemented by QR codes, which simplify pairing and configuring devices. In education, teachers and learners attach QR codes to scripts, tutorials, or resources, ensuring quick access to valuable materials. These examples demonstrate how QR codes integrate seamlessly into Linux workflows to improve efficiency and usability.
How to Generate QR Codes on Linux
Linux users have several methods to create QR codes, from terminal-based commands to online tools like me-qr.com, which offer user-friendly interfaces. Here’s a list of ways to generate QR codes within Linux environments:
Automate QR code generation with cron jobs for time-sensitive data.
Encode secure access tokens or one-time passwords in QR codes.
Store Linux commands in QR codes for quick scanning and execution.
Use QR codes to share messages encrypted with command-line tools.
Create QR codes linking to installation scripts or system resources.
In Linux environments, QR codes are not limited to traditional uses. For instance, remote server management becomes more secure with QR codes containing SSH keys or login credentials, allowing encrypted device connections. Similarly, QR codes can be used in disaster recovery processes to store encryption keys or recovery instructions. For Linux-based applications, developers embed QR codes into app interfaces to direct users to support pages or additional features, decluttering the UI. Additionally, collaborative workflows benefit from QR codes directly linking to Git repositories, enabling seamless project sharing among teams. These creative applications illustrate the versatility of QR codes in enhancing functionality and security within Linux systems.
The Open-Source Potential of QR Codes on Linux
As Linux continues to power diverse applications, from servers to IoT devices, QR codes add a layer of simplicity and connectivity. Whether you’re looking to generate QR codes for free for file sharing or embed codes into an application, Linux users have a wealth of options at their fingertips. Platforms like me-qr.com provide an intuitive and accessible way to create QR codes, while command-line tools offer flexibility for advanced users. With their ability to streamline workflows and enhance user experiences, QR codes are an indispensable asset in the Linux ecosystem.
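To make the terminal-based route concrete, here is a small sketch using the qrencode utility, a common command-line QR generator on Linux. The package name, payload strings, and file names below are only examples, so adjust them for your distribution and use case:
# Debian/Ubuntu; use your distro's package manager otherwise
sudo apt install qrencode
# render a QR code directly in the terminal
qrencode -t ANSIUTF8 "https://example.com"
# write a PNG with a larger module size and high error correction
qrencode -o wifi.png -s 8 -l H "WIFI:S:MyNetwork;T:WPA;P:MyPassword;;"
Most phone cameras recognize the second payload as Wi-Fi credentials and offer to join the network, which pairs nicely with the IoT and guest-access scenarios described earlier.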
Let the power of open-source meet the versatility of QR codes, and watch your Linux environment transform into a hub of connectivity and innovation. The post QR Codes and Linux: Bridging Open-Source Technology with Seamless Connectivity appeared first on Unixmen.
-
Tight Mode: Why Browsers Produce Different Performance Results
by: Geoff Graham Thu, 09 Jan 2025 16:16:15 +0000 I wrote a post for Smashing Magazine that was published today about this thing that Chrome and Safari have called “Tight Mode” and how it impacts page performance. I’d never heard the term until DebugBear’s Matt Zeunert mentioned it in a passing conversation, but it’s a not-so-new deal and yet there’s precious little documentation about it anywhere. So, Matt shared a couple of resources with me and I used those to put some notes together that wound up becoming the article that was published. In short: The implications are huge, as it means resources are not treated equally at face value. And yet the way Chrome and Safari approach it is wildly different, meaning the implications are wildly different depending on which browser is being evaluated. Firefox doesn’t enforce it, so we’re effectively looking at three distinct flavors of how resources are fetched and rendered on the page. It’s no wonder web performance is a hard discipline when we have these moving targets. Sure, it’s great that we now have a consistent set of metrics for evaluating, diagnosing, and discussing performance in the form of Core Web Vitals — but those metrics will never be consistent from browser to browser when the way resources are accessed and prioritized varies. Tight Mode: Why Browsers Produce Different Performance Results originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
-
FOSS Weekly #25.02: Absolute Linux, ShredOS, AI in Kdenlive, Fossify File Manager and More
by: Abhishek Prakash The holidays are over and so are the Tuxmas Days. 12 days of 12 new features, changes and announcements. As mentioned on Tuxmas Day 11, It's FOSS Lifetime membership now also gets you lifetime Reader-level membership of Linux Handbook, our other portal focused on sysadmin, DevOps and self-hosting. If you are one of the 73 people (so far) who opted for the Lifetime plan, you'll get a separate email on Linux Handbook's membership. Meanwhile, please download the 'Linux for DevOps' book for free as part of your Plus membership. Please note that this combined benefit of free lifetime Linux Handbook Reader-level membership (usually costs $18 per year) is only available till 11th January. Thereafter, it will cost $99 and won't include Linux Handbook's membership. Get the additional advantage before the time runs out. Get It's FOSS Lifetime Membership
💬 Let's see what else you get in this edition:
SteamOS rolling out.
Kdenlive working on a new AI-powered feature.
Nobara being the first one to introduce a release in 2025.
And other Linux news, videos and, of course, memes!
📰 Linux and Open Source News
Nobara 41 is the first Linux distro release of 2025. 2025 has hardly started and we have some bad news already as Absolute Linux is discontinued. Kdenlive is working on adding an AI-powered background removal tool. If rumors are to be believed, a 16 GB variant of the Raspberry Pi 5 could be coming very soon. Soon, you'll be able to install SteamOS on other handheld gaming devices. Did you know there is a dedicated Linux distribution for wiping disks? ShredOS is a Linux Distro Built to Wipe Your Data: a Linux distro built to help you destroy data. Sounds pretty cool! It's FOSS News, Sourav Rudra
🧠 What We’re Thinking About
Sourav switched to Proton VPN after going through many other VPN services; here's what he thinks of it: I Switched to Proton VPN and Here’s What I Honestly Think About It. Proton VPN is an impressive solution. Here’s my experience with it. It's FOSS News, Sourav Rudra
🧮 Linux Tips, Tutorials and More
Learn how to install and use Neovim on various distros. Updating your Python packages is crucial, and it's effortless with pip. Optimize your workflow by auto-starting AppImages on your Linux workstation. And an analogy to explain why there are so many Linux distributions: What is Linux? Why There are 100’s of Linux OS? Cannot figure out what Linux is and why there are so many versions of it? This analogy explains things in a simpler manner. It's FOSS, Abhishek Prakash
👷 Maker's and AI Corner
Did you know you could run LLMs locally on a Raspberry Pi? How to Run LLMs Locally on Raspberry Pi Using Ollama AI. Got a Raspberry Pi? How about using it to run some LLMs using Ollama for your own private AI? It's FOSS, Abhishek Kumar
📹 Videos we are watching
Do you really need media server software? Subscribe to our YouTube channel.
✨ Apps of the Week
Mullvad Browser is a very solid privacy-focused alternative to the likes of Google Chrome. Mullvad Browser: A Super Privacy-Focused Browser Based on Firefox. Mullvad is Firefox, but enhanced for privacy; a pretty interesting take on a cross-platform private browser app. It's FOSS News, Sourav Rudra. If you are looking for a change in file management on Android, then you could go for Fossify File Manager.
🧩 Quiz Time
Have some fun finding the logos of distros and open source projects. Spot All The Logos: Image Puzzle. Let’s see if you can spot the hidden items in the image!
It's FOSS, Ankush Das
💡 Quick Handy Tip
If you are an Xfce user with a multi-monitor setup, then you can span the Xfce panel across monitors. First, right-click on the panel you want to span and then go to Panel → Panel Preferences. Here, in the Display tab, set Output to Automatic and enable the “Span monitors” checkbox.
🤣 Meme of the Week
The pain is real. 😥
🗓️ Tech Trivia
Hitachi announced the first 1 MB memory chip on January 6, 1984. At the time, this was a revolutionary leap in storage technology. Today, we carry more memory in our pockets than entire systems from that era!
🧑🤝🧑 FOSSverse Corner
Pro FOSSer Neville shares his experience with Chimera Linux on virt-manager. This is a detailed write-up, so be prepared for a lengthy read. Chimera Linux in virt-manager with Plasma | Wayland | APK | Clang/LLVM | musl | dinit | BSD core tools. Want to try something really different? Chimera Linux has just released new images as of 4/12/24. Chimera is a new distro built from scratch using the Linux kernel 6.12, core tools from FreeBSD, the apk package system from Alpine, the Clang/LLVM toolchain, the musl C library, Gnome and Plasma desktops with Wayland and/or X11, multiple architectures (Intel/AMD, ARM AArch64, POWER, and RISC-V), and the dinit init system. A real mix. One could debate whether it is L… It's FOSS Community, nevj
❤️ With love
Share it with your Linux-using friends and encourage them to subscribe (hint: it's here). Share the articles in Linux subreddits and community forums. Follow us on Google News and stay updated in your News feed. Opt for It's FOSS Plus membership and support us 🙏 Keep on enjoying Linux in 2025 🐧
-
Installing Listmonk - Self-hosted Newsletter and Mailing List Manager
As a tech-enthusiast content creator, I'm always on the lookout for innovative ways to connect with my audience and share my passion for technology and self-sufficiency. But as my newsletter grew in popularity, I found myself struggling with the financial burden of relying on external services like Mailgun - a problem many creators face when trying to scale their outreach efforts without sacrificing quality. That's when I discovered Listmonk, a free and open-source mailing list manager that not only promises high performance but also gives me complete control over my data. In this article, I'll walk you through how I successfully installed and deployed Listmonk locally using Docker, sharing my experiences and lessons learned along the way. I used a Linode cloud server to test the scenario. You may use Linode, DigitalOcean, or your own servers. Get started on Linode with a $100, 60-day credit for new users. Get started on DigitalOcean with a $100, 60-day credit for new users.
Prerequisites
Before diving into the setup process, make sure you have the following:
Docker and Docker Compose installed on your server.
A custom domain that you want to use for Listmonk.
Basic knowledge of shell commands and editing configuration files.
If you are absolutely new to Docker, we have a course just for you: Learn Docker: Complete Beginner’s Course. Learn all the essentials of Docker in this series (Linux Handbook, Abdullah Tarek).
Step 1: Set up the project directory
The first thing you need to do is create the directory where you'll store all the necessary files for Listmonk; I like an organized setup (it helps in troubleshooting). In your terminal, run:
mkdir listmonk
cd listmonk
This will set up a dedicated directory for Listmonk’s files.
Step 2: Create the Docker Compose file
Listmonk has made it incredibly easy to get started with Docker. Their official documentation provides a detailed guide and even a sample docker-compose.yml file to help you get up and running quickly. Download the sample file to the current directory:
curl -LO https://github.com/knadh/listmonk/raw/master/docker-compose.yml
Here is the sample docker-compose.yml file; I tweaked some default environment variables. 💡 It's crucial to keep your credentials safe! Store them in a separate .env file, not hardcoded in your docker-compose.yml. I know, I know, I did it for this tutorial... but you're smarter than that, right? 😉 For most users, this setup should be sufficient, but you can always tweak settings to your own needs. Then run the containers in the background:
docker compose up -d
Once you've run these commands, you can access Listmonk by navigating to http://localhost:9000 in your browser.
Setting up SSL
By default, Listmonk runs over HTTP and doesn’t include built-in SSL support, which is kind of important if you are running any service these days. So the next thing we need to do is set up SSL support. While I personally prefer using Cloudflare Tunnels for SSL and remote access, this tutorial will focus on Caddy for its straightforward integration with Docker.
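Before moving on to Caddy, here is a trimmed sketch of roughly what that base docker-compose.yml looks like. It is adapted from the upstream sample, so treat the image tags, variable names, and credentials as illustrative placeholders rather than the exact file, and keep the secrets in a .env file as noted above:
services:
  app:
    image: listmonk/listmonk:latest
    container_name: listmonk_app
    restart: unless-stopped
    ports:
      - "9000:9000"
    environment:
      # listmonk reads settings from LISTMONK_* variables; all values here are placeholders
      - LISTMONK_app__address=0.0.0.0:9000
      - LISTMONK_db__host=db
      - LISTMONK_db__port=5432
      - LISTMONK_db__user=listmonk
      - LISTMONK_db__password=changeme
      - LISTMONK_db__database=listmonk
      - LISTMONK_db__ssl_mode=disable
    depends_on:
      - db
    networks:
      - listmonk

  db:
    image: postgres:alpine
    container_name: listmonk_db
    restart: unless-stopped
    environment:
      - POSTGRES_USER=listmonk
      - POSTGRES_PASSWORD=changeme
      - POSTGRES_DB=listmonk
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    networks:
      - listmonk

networks:
  listmonk:
Note that on the very first run listmonk needs its database schema installed (the upstream sample takes care of this; the project docs describe the --install step if you manage it yourself). With that baseline in place, the next step is putting it behind HTTPS.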
Start by creating a folder named caddy in the same directory as your docker-compose.yml file:
mkdir caddy
Inside the caddy folder, create a file named Caddyfile with the following contents:
listmonk.example.com {
  reverse_proxy app:9000
}
Replace listmonk.example.com with your actual domain name. This tells Caddy to proxy requests from your domain to the Listmonk service running on port 9000. Ensure your domain is correctly configured in DNS. Add an A record pointing to your server's IP address (in my case, the Linode server's IP). If you’re using Cloudflare, set the proxy status to DNS only during the initial setup to let Caddy handle SSL certificates. Next, add the Caddy service to your docker-compose.yml file. Here’s the configuration to include:
  caddy:
    image: caddy:latest
    restart: unless-stopped
    container_name: caddy
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./caddy/Caddyfile:/etc/caddy/Caddyfile
      - ./caddy/caddy_data:/data
      - ./caddy/caddy_config:/config
    networks:
      - listmonk
This configuration sets up Caddy to handle HTTP (port 80) and HTTPS (port 443) traffic, automatically obtain SSL certificates, and reverse proxy requests to the Listmonk container. Finally, restart your containers to apply the new settings:
docker compose restart
Once the containers are up and running, navigate to your domain (e.g., https://listmonk.example.com) in a browser. Caddy will handle the SSL certificate issuance and proxy the traffic to Listmonk seamlessly.
Step 3: Accessing the Listmonk web UI
Once Listmonk is up and running, it’s time to access the web interface and complete the initial setup. Open your browser and navigate to the domain or IP address where Listmonk is hosted. If you’ve configured HTTPS, the URL should look something like https://listmonk.yourdomain.com, and you’ll be greeted with the login page. Click Login to proceed.
Creating the admin user
On the login screen, you’ll be prompted to create an administrator account. Enter your email address, a username, and a secure password, then click Continue. This account will serve as the primary admin for managing Listmonk.
Configure general settings
Once logged in, navigate to Settings > Settings in the left sidebar. Under the General tab, customize the following:
Site Name: Enter a name for your Listmonk instance.
Root URL: Replace the default http://localhost:9000 with your domain (e.g., https://listmonk.yourdomain.com).
Admin Email: Add an email address for administrative notifications.
Click Save to apply these changes.
Configure SMTP settings
To send emails, you’ll need to configure SMTP settings. Click on the SMTP tab in the settings and fill in the details:
Host: smtp.emailhost.com
Port: 465
Auth Protocol: Login
Username: Your email address
Password: Your email password (or a Gmail App password, generated via Google’s security settings)
TLS: SSL/TLS
Click Save to confirm the settings.
Create a new campaign list
Now, let’s create a list to manage your subscribers:
Go to All Lists in the left sidebar and click + New.
Give your list a name, set it to Public, and choose between Single Opt-In or Double Opt-In.
Add a description, then click Save.
Your newsletter subscription form will now be available at https://listmonk.yourdomain.com/subscription/form. With everything set up and running smoothly, it’s time to put Listmonk to work. You can easily import your existing subscribers, customize the look and feel of your emails, and even change the logo to match your brand.
Final thoughts
And that’s it!
You’ve successfully set up Listmonk, configured SMTP, and created your first campaign list. From here, you can start sending newsletters and growing your audience. I’m currently testing Listmonk as the newsletter solution for my own website, and while it’s robust, I’m curious to see how it performs in a production environment. That said, I’m genuinely impressed by the thought and effort that Kailash Nadh and the contributors have put into this software; it’s a remarkable achievement. For any questions or challenges you encounter, the Listmonk GitHub page is an excellent resource, and the developers are highly responsive. Finally, I’d love to hear your thoughts! Share your feedback, comments, or suggestions below, along with your experience with Listmonk and how you’re using it for your projects. Happy emailing! 📨
-
Chris’ Corner: User Control
by: Chris Coyier Mon, 06 Jan 2025 20:47:37 +0000 Like Miriam Suzanne says: I like the idea of controlling my own experience when browsing and using the web. Bump up that default font size, you’re worth it. Here’s another version of control. If you publish a truncated RSS feed on your site, but the site itself has more content, I reserve the right to go fetch that content and read it through a custom RSS feed. I feel like that’s essentially the same thing as if I had an elaborate user stylesheet that I applied just to that website that made it look how I wanted it to look. It would be weird to be anti user-stylesheet. I probably don’t take enough control over my own experience on sites, really. Sometimes it’s just a time constraint where I don’t have the spoons to do a bunch of customization. But the spoon math changes when it has to do with doing my job better. I was thinking about this when someone poked me that an article I published had a wrong link in it. As I was writing it in WordPress, somehow I pointed the link at some internal admin screen URL instead of where I was trying to link to. Worse, I bet I’ve made that same mistake 10 times this year. I don’t know what the heck the problem is (some kinda fat finger issue, probably) but the same problem is happening too much. What can help? User stylesheets can help! I love it when CSS helps me do my job better in weird, subtle ways. I’ve applied this CSS now:
.editor-visual-editor a[href*="/wp-admin/"]::after { content: " DERP!"; color: red; }
That first class is just something to scope down the editor area in WordPress, then I select any links that have “wp-admin” in them, which I almost certainly do not want to be linking to, and show a visual warning. It’s a little silly, but it will literally work to stop this mistake I keep making. I find it surprising that only Safari has entirely native support for linking up your own user CSS, but there are ways to do it via extension or other features in all browsers. Welp, now that we’re talking about CSS, I can’t help but share some of my favorite links in that area. Dave put his finger on an idea I’m wildly jealous of: CSS wants to be a system. Yes! It so does! CSS wants to be a system! Alone, it’s just selectors, key/value pairs, and a smattering of other features. It doesn’t tell you how to do it, it is lumber and hardware saying build me into a tower! And also: do it your way! And the people do. Some people’s personality is: I have made this system, follow me, disciples, and embrace me. Other people’s personality is: I have also made a system, it is mine, my own, my prec… please step back behind the rope. Annnnnnd more. CSS Surprise Manga Lines from Alvaro are fun and weird and clever. Whirl: “CSS loading animations with minimal effort!” Jhey’s got 108 of them open sourced so far (like, 5 years ago, but I’m just seeing it.) Next-level frosted glass with backdrop-filter. Josh covers ideas (with credit all the way back to Jamie Gray) related to the “blur the stuff behind it” look. Yes, backdrop-filter does the heavy lifting, but there are SO MANY DETAILS to juice it up. Custom Top and Bottom CSS Container Masks from Andrew is a nice technique. I like the idea of a “safe” way to build non-rectangular containers where the content you put inside is actually placed safely.
-
The Importance of Investing in Soft Skills in the Age of AI
by: Andy Bell Mon, 06 Jan 2025 14:58:46 +0000 I’ll set out my stall and let you know I am still an AI skeptic. Heck, I still wrap “AI” in quotes a lot of the time I talk about it. I am, however, skeptical of the present, rather than the future. I wouldn’t say I’m positive or even excited about where AI is going, but there’s an inevitability that in development circles, it will be further engrained in our work. We joke in the industry that the suggestions that AI gives us are, more often than not, terrible, but that will only improve in time. A good basis for that theory is how fast generative AI has improved with image and video generation. Sure, generated images still have that “shrink-wrapped” look about them, and generated images of people have extra… um… limbs, but consider how much AI-generated images have improved, even in the last 12 months. There’s also the case that VC money is seemingly exclusively being invested in AI, industry-wide. Pair that with a continuously turbulent tech recruitment situation, with endless major layoffs, and even a skeptic like myself can see the writing on the wall with how our jobs as developers are going to be affected. The biggest risk factor I can foresee is that if your sole responsibility is to write code, your job is almost certainly at risk. I don’t think this is an imminent risk in a lot of cases, but as generative AI improves its code output — just like it has for images and video — it’s only a matter of time before it becomes a redundancy risk for actual human developers. Do I think this is right? Absolutely not. Do I think it’s time to panic? Not yet, but I do see a lot of value in evolving your skillset beyond writing code. I especially see the value in improving your soft skills. What are soft skills? A good way to think of soft skills is that they are life skills. Soft skills include: communicating with others, organizing yourself and others, making decisions, and adapting to difficult situations. I believe so much in soft skills that I call them core skills, and for the rest of this article, I’ll refer to them as core skills, to underline their importance. The path to becoming a truly great developer is down to more than just coding. It comes down to how you approach everything else, like communication, giving and receiving feedback, finding a pragmatic solution, planning — and even thinking like a web developer. I’ve been working with CSS for over 15 years at this point and a lot has changed in its capabilities. What hasn’t changed, though, is the core skills — often called “soft skills” — that are required to push you to the next level. I’ve spent a large chunk of those 15 years as a consultant, helping organizations — both global corporations and small startups — write better CSS. In almost every single case, an improvement of the organization’s core skills was the overarching difference. The main reason for this is that, a lot of the time, the organizations I worked with coded themselves into a corner. They’d done that because they just plowed through — Jira ticket after Jira ticket — rather than step back and question, “is our approach actually working?” By focusing on their team’s core skills, we were often — and very quickly — able to identify problem areas and come up with pragmatic solutions that were almost never development solutions.
These solutions were instead:
Improving communication and collaboration between design and development teams
Reducing design “hand-off” and instead making the web-based output the source of truth
Moving slowly and methodically to move fast
Putting a sharp focus on planning and collaboration between developers and designers, way in advance of production work being started
Changing the mindset of “plow on” to taking a step back, thoroughly evaluating the problem, and then developing a collaborative and, by proxy, much simpler solution
Will improving my core skills actually help? One thing AI cannot do — and (hopefully) never will be able to do — is be human. Core skills — especially communication skills — are very difficult for AI to recreate well because the way we communicate is uniquely human. I’ve been doing this job a long time, and something that’s certainly propelled my career is the fact that I’ve always been versatile. Having a multifaceted skillset — like, in my case, learning CSS and HTML to improve my design work — will only benefit you. It opens up other opportunities for you too, which is especially important with the way the tech industry currently is. If you’re wondering how to get started on improving your core skills, I’ve got you. I produced a course called Complete CSS this year, but it’s a slight rug-pull because it’s actually a core skills course that uses CSS as the context. You get to learn some iron-clad CSS skills alongside those core skills too, as a bonus. It’s definitely worth checking out if you are interested in developing your core skills, especially so if you receive a training budget from your employer.
Wrapping up
The main message I want to get across is that developing your core skills is as important — if not more important — than keeping up to date with the latest CSS or JavaScript thing. It might be uncomfortable for you to do that, but trust me, being able to set yourself apart from AI is only going to be a good thing, and improving your core skills is a sure-fire way to do exactly that. The Importance of Investing in Soft Skills in the Age of AI originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
- Getting Started With Linux Terminal