Everything posted by Blogger

  1. by: Suraj Kumar Fri, 07 Feb 2025 16:20:00 +0000 What is NoSQL, and which are the best NoSQL databases? These are common questions that companies and developers ask. Demand for NoSQL databases keeps growing because traditional relational databases are no longer enough for current data-management requirements. Companies now have millions of customers and store details about each of them, and handling such colossal data is tough; hence the need for NoSQL. These databases are more agile and scalable, and they are a better choice for handling vast customer data and finding crucial insights in it. Thus, in this article, we will look at the best NoSQL databases. What is a NoSQL Database? If you belong to the data science field, you may have heard that NoSQL databases are non-relational databases. This may sound unclear, and it can be challenging to understand if you are a fresher in this field. NoSQL is short for "Not Only SQL", which also means such a system can handle relational workloads. In this kind of database, the data is not split into many separate tables; related data is kept together in a single data structure. Thus, even with vast data, users do not experience lag, and companies do not need to hire costly professionals who use complex techniques to present the data in its simplest form. For this, though, the company needs to choose the best NoSQL database, and professionals need to learn it. 8 Best NoSQL Databases in 2025. 1. Apache HBase: Apache HBase is an open-source database and part of the Hadoop ecosystem. It can efficiently read and write the vast data that a company has stored and is designed to handle billions of rows and millions of columns. It is modeled on Bigtable, a distributed storage system developed at Google to structure large volumes of data. It is on our list of best NoSQL databases for its scalability, consistent reads, and more. 2. MongoDB: MongoDB is a great general-purpose, distributed database, developed mainly for developers who build for the cloud. It stores data in JSON-like documents and is one of the most powerful and efficient databases on the market. MongoDB supports various methods and techniques to analyze and interpret data, including graph, text, and geospatial queries. It also gives you the added advantage of high-level security features such as SSL/TLS, encryption, and firewall support. Thus, it can be the best NoSQL database to consider for your business and for learning purposes.
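To make MongoDB's document model concrete, here is a minimal sketch using the mongosh shell. It assumes a MongoDB server is running locally; the customers collection and its fields are made-up names for illustration:

mongosh --quiet --eval '
  // store one JSON-like document, no table schema required
  db.customers.insertOne({ name: "Asha", city: "Pune", orders: 3 });
  // fetch it back with a simple equality filter
  printjson(db.customers.findOne({ city: "Pune" }));
'

Because the whole record lives in one document, there is no join across tables needed to reassemble it.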
3. Apache CouchDB: If you are looking for a database that offers easy access and storage, you can consider Apache CouchDB. It is a single-node database, free and open source, and you can scale it out when you see fit, storing your data in a cluster of nodes across multiple servers. It supports the JSON data format and an HTTP protocol that can integrate with the HTTP proxies of your servers. It is also a secure choice because it is designed with crash resistance in mind. 4. Apache Cassandra: Apache Cassandra is another excellent open-source NoSQL database available currently. It was initially developed at Facebook, and its design draws on Google's Bigtable. This database is deployed almost everywhere and scales as per the requirements of its users. It can smoothly handle thousands of concurrent requests every second and petabytes of data. More than 400 companies, including Facebook, Netflix, Coursera, and Instagram, use the Apache Cassandra NoSQL database. 5. OrientDB: OrientDB is an ideal open-source NoSQL database that supports several models: graph, document, and key-value. It is written in Java and can show the relationships between managed records as a graph. It is a reliable and secure database suitable for a large customer base as well. Moreover, its graph edition is capable of visualizing and interacting with extensive data. 6. RavenDB: RavenDB is a document database with NoSQL features, including ACID transactions that ensure data integrity. It is scalable, so if your customer base grows into the millions, it can grow with you. You can install it on-premises or use it in the cloud through services such as Microsoft Azure and Amazon Web Services. 7. Neo4j: If you are searching for a NoSQL database that can handle not only data but also the real relationships between data, this is the perfect database for you. With Neo4j, you can store data safely and re-access it in a fast and convenient manner, as every stored record keeps direct pointers to its relationships. You also get the Cypher query language, which gives you a much faster querying experience. 8. Hypertable: Hypertable is an open-source NoSQL database that is scalable and can stand in for many relational databases. It was developed mainly to solve scalability problems and is based on Google's Bigtable design. It is written in the C++ programming language, and you can use it on macOS and Linux. It is suitable for managing big data and can use various techniques to sort the available data. It can be a great choice if you expect maximum efficiency and cost-effectiveness from your database. Conclusion: In this article, we learned about some of the best NoSQL databases: secure, widely available, widely used, and open source, including MongoDB, OrientDB, Apache HBase, and Apache Cassandra. If you liked this list of best NoSQL databases, leave a comment and mention any NoSQL database you think we missed and that should be included. The post 8 Best NoSQL Databases in 2025 appeared first on The Crazy Programmer.
  2. by: Vishal Yadav Fri, 07 Feb 2025 08:56:00 +0000 In this article, you will find some of the best free HTML cheat sheets, which include all of the key attributes for lists, forms, text formatting, and document structure. Additionally, we will show you an image preview of each HTML cheat sheet. What is HTML? HTML (HyperText Markup Language) is a markup language used to develop web pages. It employs HTML elements to arrange web pages so that they have a header, body, sidebar, and footer. HTML tags can also format text, embed images or attributes, make lists, and connect to external files. That last function allows you to change the page's layout by including CSS files and other objects. It is crucial to utilize proper HTML tags, as an incorrect structure may break the web page. Worse, search engines may be unable to read the information provided within the tags. As HTML has so many tags, a helpful HTML cheat sheet can assist you in using the language.
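As a quick illustration of the structure described above, here is a minimal sketch that writes a small page from the shell; the file name and all of the text content are made up for the example:

cat > index.html <<'EOF'
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Cheat sheet demo</title>
  </head>
  <body>
    <header><h1>Hello, HTML</h1></header>
    <main>
      <p>A paragraph with a <a href="https://example.com">link</a>.</p>
      <ul><li>A list item</li></ul>
    </main>
    <footer><small>Footer text</small></footer>
  </body>
</html>
EOF

Open index.html in a browser to see the header, body content, and footer that the tags above define.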
5 Best HTML Cheat Sheets. Bluehost.com: Bluehost's website provides an informative infographic with basic HTML and CSS coding information. The guide defines and explains the fundamental applications of HTML, CSS, snippets, tags, and hyperlinks. The graphic includes examples of specific code that can be used to develop different features on a website. Link: https://www.bluehost.com/resources/html-css-cheat-sheet-infographic/ cheat-sheets.org: This cheat sheet does an excellent job of summarising numerous common HTML tags on a single page. There are tables for fundamental elements such as forms, text markup, tables, and objects. It is posted as an image file on the cheat-sheets.org website, making it simple to print or save for future reference. It is an excellent resource for any coder who wants a quick look at the basics. Link: http://www.cheat-sheets.org/saved-copy/html-cheat-sheet.png Codecademy.com: The HTML cheat sheet from Codecademy is an easy-to-navigate, easy-to-understand guide to everything HTML. It is divided into sections such as Elements and Structure, Tables, Forms, and Semantic HTML, making it simple to find the code you need to write just about anything in HTML. It also contains an explanation of each tag and how to utilize it. This cheat sheet is also printable if you prefer a hard copy to refer to while coding. Link: https://www.codecademy.com/learn/learn-html/modules/learn-html-elements/cheatsheet Digital.com: The Digital.com HTML cheat sheet is an excellent top-down reference for all the key HTML tags included in the HTML5 standard. The sheet begins by explaining essential HTML components, followed by ten sections on content embedding and metadata. Each tag has a description, related properties, and a code sample that shows how to use it. Link: https://digital.com/tools/html-cheatsheet/ websitesetup.org: This basic HTML cheat sheet is presented as a single, easy-to-understand page. Half of the sheet is devoted to table formatting, with a detailed example of how to use those components. They also provide several download options for the cheat sheet, including a colour PDF, a black-and-white PDF, and a JPG image file. Link: https://websitesetup.org/html5-cheat-sheet/ I hope this article has given you a better understanding of what an HTML cheat sheet is. The idea was to provide our readers with a quick reference guide to various frequently used HTML tags. If you have any queries related to HTML cheat sheets, please let us know in the comment section below. The post 5 Best HTML Cheat Sheets 2025 appeared first on The Crazy Programmer.
  3. Blogger posted a blog entry in Programmer's Corner
    by: aiparabellum.com Fri, 07 Feb 2025 08:38:37 +0000 ContractCrab is an innovative AI-driven platform designed to simplify and revolutionize the way businesses handle contract reviews. By leveraging advanced artificial intelligence, this tool enables users to review contracts in just one click, significantly improving negotiation processes and saving both time and resources. Whether you are a legal professional, a business owner, or an individual needing efficient contract management, ContractCrab ensures accuracy, speed, and cost-effectiveness in handling your legal documents. Features of ContractCrab ContractCrab offers a wide range of features that cater to varied contract management needs: AI Contract Review: Automatically analyze contracts for key clauses and potential risks. Contract Summarizer: Generate concise summaries to focus on the essential points. AI Contract Storage: Securely store contracts with end-to-end encryption. Contract Extraction: Extract key information and clauses from lengthy documents. Legal Automation: Automate repetitive legal processes for enhanced efficiency. Specialized Reviews: Provides tailored reviews for employment contracts, physician agreements, and more. These features are designed to reduce manual effort, improve contract comprehension, and ensure legal accuracy. How It Works Using ContractCrab is straightforward and user-friendly: Upload the Contract: Begin by uploading your document in .pdf, .docx, or .txt format. Review the Details: The AI analyzes the content, identifies redundancies, and highlights key sections. Manage the Changes: Accept or reject AI-suggested modifications to suit your requirements. Enjoy the Result: Receive a concise, legally accurate contract summary within seconds. This seamless process ensures that contracts are reviewed quickly and effectively, saving you time and effort. Benefits of ContractCrab ContractCrab provides numerous advantages to its users: Time-Saving: Complete contract reviews in seconds instead of days. Cost-Effective: With pricing as low as $3 per hour, it is far more affordable than hiring legal professionals. Accuracy: Eliminates human errors caused by fatigue or inattention. 24/7 Availability: Accessible anytime, eliminating scheduling constraints. Enhanced Negotiations: Streamlines the process, enabling users to focus on critical aspects of agreements. Data Security: Ensures end-to-end encryption and regular data backups for maximum protection. These benefits make ContractCrab an indispensable tool for businesses and individuals alike. Pricing ContractCrab offers competitive and transparent pricing plans: Starting at $3 per hour: Ideal for quick and efficient reviews. Monthly Subscription at $30: Provides unlimited access to all features. This affordability ensures that businesses of all sizes can leverage the platform’s advanced AI capabilities without overspending. Review ContractCrab has received positive feedback from professionals and users across industries: Ellen Hernandez, Contract Manager: “The most attractive pricing on the legal technology market. Excellent value for the features provided.” William Padilla, Chief Security Officer: “Promising project ahead. Looking forward to the launch!” Jonathan Quinn, Personal Assistant: “Top-tier process automation. It’s great to pre-check any document before it moves to the next step.” These testimonials highlight ContractCrab’s potential to transform contract management with its advanced features and affordability. 
Conclusion ContractCrab stands out as a cutting-edge solution for AI-powered contract review, offering exceptional accuracy, speed, and cost-efficiency. Its user-friendly interface and robust features cater to diverse needs, making it an indispensable tool for businesses and individuals. With pricing as low as $3 per hour, ContractCrab ensures accessibility without compromising quality. Whether you are managing employment contracts, legal agreements, or construction documents, this platform simplifies the process, enhances readability, and mitigates risks effectively. Visit Website The post ContractCrab appeared first on AI Parabellum.
  4. by: Sreenath On our Arch installation video, a viewer requested a tutorial on installing Arch with BTRFS and with encryption enabled, and hence this tutorial came into existence. I am using the official archinstall script. Though it is a command line tool, this guided installer allows even a moderately experienced user to enjoy the "greatness" of Arch Linux. 🚧 The method discussed here wipes out the existing operating system(s) from your computer and installs Arch Linux on it. So if you are going to follow this tutorial, make sure that you have backed up your files externally, or else you'll lose all of them. You have been warned! Requirements. Here's what I recommend for this tutorial: An x86_64 (i.e. 64-bit) compatible machine. Minimum 2 GB of RAM (recommended 4-8 GB, depending upon the desktop environment or window manager you choose). At least 10 GB of free disk space (recommended 20 GB for basic usage with a desktop environment). An active internet connection. A USB drive with a minimum of 4 GB of storage capacity. Familiarity with the Linux command line. Once you have made sure you have all the requirements, let's install Arch Linux. Step 1: Download the Arch Linux ISO. Download the ISO from the official website. Both direct download and torrent links are available. Step 2: Create a live USB of Arch Linux. You will have to create a live USB of Arch Linux from the ISO you just downloaded. You may use the Etcher GUI tool to create the live USB; it is available for both Windows and Linux. Alternatively, if you are on Linux, you can use the dd command to create a live USB. Replace /path/to/archlinux.iso with the path where you have downloaded the ISO file, and /dev/sdx with your USB drive in the example below. You can get your drive information using the lsblk command. dd bs=4M if=/path/to/archlinux.iso of=/dev/sdx status=progress && sync Basically, choose any live USB creation tool you like. Step 3: Boot from the live USB. 🚧 Do note that in some cases, you may not be able to boot from the live USB with Secure Boot enabled. If that's the case with you, disable Secure Boot first. Once you have created a live USB for Arch Linux, shut down your PC. Plug in your USB and boot your system. While booting, keep pressing the F2, F10 or F12 key (depending upon your system) to access the UEFI boot settings. Here, select to boot from USB or removable disk. Once you do that and the system boots, select the Arch Linux UEFI (x86_64) option to start the live medium. 📋 Legacy BIOS users should select the x86_64 BIOS option. Step 4: Connect to Wi-Fi. You need an active internet connection for installing Arch Linux. If you have a wired connection, good. Else, you need to make some effort to connect to your Wi-Fi before starting the archinstall script. First, in the Arch Linux live prompt, enter the command: iwctl This starts the Internet Wireless daemon control utility, which is used to set up a Wi-Fi connection on your system. As soon as you enter the command, you can see that the prompt has changed to iwd. Here, you need to list the devices to get the name of your wireless hardware device: device list In my case, the name of the Wi-Fi device is wlan0. Now, use this device to scan for available Wi-Fi networks in the vicinity: station wlan0 scan station wlan0 get-networks This will print the names of the Wi-Fi networks available. Note the "Network Name". To connect to the network, use the command: station wlan0 connect "Network Name" This will ask you to enter the Wi-Fi password. Enter it and you should now be connected to the internet. Exit the iwd prompt using CTRL+D. You can check if the network is functioning using the ping command: ping google.com Step 5: Pacman download settings. Before starting the archinstall script, let's change the download settings of pacman. Edit the pacman configuration using: nano /etc/pacman.conf Here, uncomment the ParallelDownloads option and set a value according to your internet speed. If you have a decent internet speed, set the parallel download count to 10, as in the excerpt below.
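For reference, the relevant part of /etc/pacman.conf would look something like this after the edit (a sketch; pick a count that suits your bandwidth):

# /etc/pacman.conf (excerpt)
[options]
# Misc options
ParallelDownloads = 10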
📋 On my test system, I needed to run pacman -Sy and then pacman -S archlinux-keyring (to install the Arch Linux keyring) before starting the installer. Otherwise, the installer crashed with some errors. You may also need to read carefully what the error prompt says. Step 6: Start the archinstall script. With the network connection ready, let's start the archinstall script with the command below: archinstall This will start the text-based Arch installation script. Set the installation language: The first setting in the installer is the installation language. This option sets what language is used in the terminal user interface. The latest archinstall provides a percentage value next to each language that describes how much of the translation has been completed. I will be going with the default, English. Locale settings: You should set your locale and keyboard settings here. If you are OK with the defaults, you can skip to the next setting. 💡 Some programs, like the Rofi launcher, may not launch if your locale is different from en_US. So, adding en_US as a locale is a good way to avoid future headaches. To change a setting, press the enter key to go inside and select individual items. Mirror settings: Press the enter key on Mirrors in the main menu of the archinstall script. This will bring you to the mirror selection section. Select the Mirror Region option. This will provide a list of countries; you can select a country near your location for a faster network. 💡 Use the "/" key to start a search and the TAB key to select/mark an entry. Once multiple entries are marked, use the ENTER key to set those countries as mirrors. The mirrors from the selected countries will be listed. Move to "Back" and press enter. Disk configuration: Now, you need to partition your disk. archinstall has a neat mechanism to help you here. On the main menu, select "Disk Partition". Inside this, select "Partitioning". Here, use the option "Use a best-effort default partition layout". In the next dialog, use the TAB key to select your hard disk device and press the ENTER key. Choose a partition type. Here, I am going with BTRFS; you can also pick EXT4, a very well-tested file system, or XFS, F2FS, etc. On the next screen, you will be asked whether to use a default subvolume structure. Let's say you select "Yes". You will then be asked to pick compression or disable copy-on-write. It is advisable to select Compression, which enables Zstd compression. This will create a partition for you, with subvolumes for /, /home, /var/log, /var/cache/pacman/pkg, and /.snapshots.
📋 Subvolumes are beneficial for users who want granular control and use features like snapshots extensively. If you are running a simple system and are not going to use such features, you can choose to skip the subvolumes. For this, pick "No" for BTRFS default subvolumes, and on the next screen select the "Use Compression" option. Thus, you will get a simple partition layout for the system. Use the "Back" button to go to the installer main menu. Disk encryption: 🚧 Disk encryption may introduce a slight performance overhead. If your system is a casual home PC or a secondary system with no critical data, you can skip the encryption. Select the Disk Encryption option from the main menu. In the dialog box, select Encryption type and pick LUKS. This will enable two other fields: Encryption password and Partition. Fill in the fields, and select the partitions that need to be encrypted using the TAB key. 🚧 Do not forget the encryption password. If you do, you'll lose access to the data on the disk, and reinstalling the entire operating system will be the only option left for you. Swap: Swap on zram is enabled by default in the installer. If needed, you can disable it. Bootloader: By default, it is set to systemd-boot. This is a simple bootloader for those who value simplicity. If you prefer familiar functionality, go for the GRUB bootloader. Hostname: You can configure the hostname here. By default, it is archlinux. Root password: Next is the root password. Select it using the enter key, then enter and confirm a strong root password. User creation: It is important to create a regular user account, other than the root account, for day-to-day purposes. In the User section, select the "Add a user" option. Here, enter the username, then enter a password and confirm it by entering it again when prompted. You will be asked whether the user should be a superuser; give the created user administrative privileges by selecting the "Yes" option. Now, use the "Confirm and exit" option. Profile (desktop selection): The "Profile" field in the installer is where we will set the desktop environment. Select Profile → Type. Here, select the Desktop option. On the next screen, select a desktop (or desktops) using the TAB key and press enter. 🚧 Try to avoid installing multiple heavy desktops on one system; KDE Plasma and GNOME on one system, for example, is not recommended. 💡 You can choose one desktop like GNOME or Plasma and then choose one tiling window manager, making it two desktop options to install. Selecting a desktop and pressing enter will bring you to the driver selection settings. For the test system, the installer automatically assigned all open-source drivers. You can enter the "Graphics driver" settings and decide on the appropriate driver packages. Normally, you should not need to do anything about the greeter, as it will be selected automatically (GDM for GNOME, SDDM for KDE Plasma, etc.). Audio settings: For audio, you can select PipeWire or PulseAudio. Kernel: You can either go with the default Linux kernel or select multiple kernels; for example, both linux and linux-lts. Learn more about kernel options in Arch Linux.
Network configuration: In the Network Configuration settings, select the "Use NetworkManager" option. Additional packages: If you need to install additional packages on your system, you can do it at the installation stage itself. Press the enter key on the "Additional packages" option in the main menu. Now, just enter the proper names of the packages you want to install, separated by spaces; for example, firefox, htop, fastfetch, and starship. Optional repositories: You can enable the multilib repository using this setting. Select items using the TAB key and press enter. Learn about the various Arch repos here. Timezone: Search for and set the timezone based on your location: Asia/Kolkata for Indian Standard Time, US/Central for the US central timezone, etc. Automatic time sync with NTP will be enabled automatically, and there is no need to change it. Start the actual install: Once all the settings have been done, you can use the Install option to start the installation procedure. You will be asked to verify the installation configuration you have set. Once satisfied, press enter on the "Yes" option. The process will start, and you need to wait for some time for all the downloads and installations to finish. Step 7: Post installation. Once the archinstall script finishes, it will ask you whether to chroot into the new system for further settings. You can answer No to the question if you have nothing planned to do. You can now shut down the system: shutdown now Once the system is shut down, remove the USB device from the port and boot the system. If you enabled encryption, this will bring you to the disk unlock prompt first; enter the password you have set. You will then reach the login page. Enter your password to log in to your system. Enjoy Arch Linux with BTRFS and an encrypted drive.
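Once logged in, a quick sanity check can confirm the setup. A minimal sketch with standard utilities; the exact names and output will vary with your hardware and choices:

lsblk -f                     # the root device should show a crypto_LUKS container with btrfs inside
sudo btrfs subvolume list /  # lists the subvolumes the installer created
swapon --show                # verifies that the zram swap device is active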
  5. by: Geoff Graham Thu, 06 Feb 2025 15:29:35 +0000 A little gem from Kevin Powell’s “HTML & CSS Tip of the Week” website, reminding us that using container queries opens up container query units for sizing things based on the size of the queried container. So, 1cqi is equivalent to 1% of the container’s inline size, and 1cqb is equal to 1% of the container’s block size. I’d be remiss not to mention the cqmin and cqmax units, which evaluate to either the container’s inline or block size, whichever is smaller or larger, respectively. So, we could say 50cqmax and that equals 50% of the container’s size, but it will look at both the container’s inline and block size, determine which is greater, and use that to calculate the final computed value. That’s a nice dash of conditional logic. It can help maintain proportions if you think the writing mode might change on you, such as moving from horizontal to vertical. Container query units: cqi and cqb originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  6. by: Abhishek Prakash Recently, I bought an Asus Zenbook and dual booted it with Ubuntu. But Ubuntu 24.04 didn't perform well on the new hardware, and thus I removed Ubuntu from the dual boot setup. This is typically done by moving the Windows Boot Manager up the boot order and deleting the Linux partition from within Windows. The annoyance is that Linux will still show up in the UEFI boot settings. Although it doesn't hurt to leave it there, it triggers some sort of OCD in me to want a pristine system without unnecessary stuff. And hence, I set out to 'fix this non-issue', and I am going to share how you can do the same if you like. The process is composed of two steps: mount the EFI system partition (ESP) in Windows (this has to be done on the command line), then delete the Ubuntu/Linux entry from the EFI folder using either the command line or a GUI. 📋 Again, the Linux entry in the UEFI boot menu is not a blocking issue, and you can leave it as it is and use only Windows on the system. Step 1: "Mount" the EFI partition in Windows. Press the Windows start button and look for CMD. Right click on it and select "Run as administrator". Once the command prompt is open, start the disk partition utility by entering: diskpart Type "list disk" to list all the disks present on your system and identify the disk where the EFI partition is located: list disk If you have only one disk, it should show only one entry. Select the disk to see all the partitions on it: select disk 0 You should see 'Disk 0 is now the selected disk' in the output. Now, list all the partitions on this disk with: list partition Usually, the EFI partition is the one labeled "System", and as you can see in my case, it is partition number 1. 🚧 Since my ESP (EFI System Partition) is partition number 1, I'll select this partition. Yours could be different, so pay attention. select partition 1 Now, assign it a drive letter. Since C, D, E, etc. are usually taken, let's go to the end of the alphabet and use the letter X here: assign letter x With the EFI partition assigned a drive letter, you can now see it in the file explorer like the C or D drives. Basically, all this hassle was just for mounting the ESP. Anyway, exit the disk partition tool: exit
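As a possible shortcut, Windows also ships the mountvol utility, which can map the EFI system partition to a drive letter in one step from an elevated command prompt. Treat this as an alternative worth verifying on your system rather than a guaranteed replacement for the diskpart session above:

mountvol X: /S    (mount the EFI system partition as drive X:)
mountvol X: /D    (later, remove the drive letter mapping again)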
Step 2: Delete the Linux folder from the EFI partition. Till here, we were not doing anything risky. But now, we have to delete the Ubuntu/Linux folder from the EFI partition. This can be done graphically as well as via the command line. You used the command line above, but for 'deleting' something, I would recommend the graphical method. Method 1: Use the GUI. Open the task manager in Windows (Ctrl+Alt+Del) and click 'Run new task'. This will give you the option to create a new task. What you have to do here is click on the "Browse" button. You can now browse the partitions and the files inside them, and add or delete files and folders. Browse to drive X and the EFI folder. You should see ubuntu (or whichever distro you used) listed here. Select it first and then right click to see the option to delete it. I could not take a screenshot of this, as Windows' built-in tool didn't allow taking screenshots of the right-click context menu. Once you hit the delete option, a confirmation dialog box will pop up. Select yes, close the browser, and then close the task manager as well. Congratulations! Now if you access the UEFI settings from Windows, you won't see the Linux entry anymore. Command line warrior? Let's see the other method for you. Method 2: Use the command line. 📋 You need to perform all this in a command prompt running as administrator. Use this command to enter the drive you mounted earlier. Mind the colon after the drive letter: x: See the contents of the directory with: dir It should show a folder named EFI. Enter this directory: cd EFI And now look at the contents of this folder: dir You should see a folder belonging to Linux. It could be named ubuntu, fedora, etc. The next step is to use the rd command (remove directory) with the Linux folder's name to delete it: rd ubuntu /s Once done, exit the command prompt by typing exit. Conclusion: The ESP partition mounted as drive X won't be there anymore when you restart the system, and neither will the Linux boot entry. In a YouTube video where I discussed uninstalling Ubuntu from a dual boot system, I mentioned that a leftover Ubuntu entry in the boot menu doesn't hurt. Still, a few comments indicated that people would like everything cleaned up. Hence, this tutorial. 💬 Is it worth the hassle to clean up the Linux boot entry after removing Linux from a dual boot? Share your thoughts in the comments, please.
  7. by: Abhishek Prakash The brilliance and curiosity of some people amaze me. Take this person who managed to run Linux inside a PDF file 🫡 Wow! You Can Now Run Linux Inside a PDF. Yes, you read that right. It's FOSS News | Sourav Rudra 💬 Let's see what else you get in this edition: Debian logging off X/Twitter. Installing DeepSeek R1 locally on Linux. Doom running on Android 16's Linux Terminal. And other Linux news, tips and, of course, memes! This edition of FOSS Weekly is supported by PikaPods. ❇️ PikaPods: Enjoy Self-hosting Hassle-free. PikaPods allows you to quickly deploy your favorite open source software. All future updates are handled automatically by PikaPods while you enjoy using the software. PikaPods also shares revenue with the original developers of the software. You get a $5 free credit to try it out and see if you can rely on PikaPods. PikaPods - Instant Open Source App Hosting: Run the finest open source web apps from $1/month, fully managed, no tracking, no ads, full privacy. Self-hosting was never this convenient. 📰 Linux and Open Source News: An official Arch Linux WSL image is in the works. Debian has stopped publishing content on X/Twitter, and it created a controversy. Running Doom on Android 16's Linux Terminal is possible. Fedora's KDE Plasma Desktop has been promoted to Edition. The sole maintainer of the Linux wireless networking drivers has stepped down. GTK is set to drop support for X11 and Broadway: GTK Drops X11! And Broadway Support, Lays the Foundation for New Android Backend. It's FOSS News | Sourav Rudra 🧠 What We're Thinking About: The Video Game History Foundation is giving people early access to their archive of research materials on video game history. The VGHF Library opens in early access: For free. For everyone. Wherever you are. Video Game History Foundation | Phil Salvador. Development of Serpent OS has slowed down considerably due to funding issues. 🧮 Linux Tips, Tutorials and More: It's FOSS Community member Bhuwan shares how you can use Emacs as your terminal multiplexer on Windows. Get started with Rust programming with our Rust Basics series. Linux books I own and recommend. A common issue these days is Windows missing from Grub in a fresh dual boot setup. The fix is easy: Missing Windows from Grub After Dual Boot? Here's What You Can Do. Can't see Windows in Grub after successfully dual booting? That's because os-prober is disabled. Here's how to fix it. It's FOSS | Abhishek Prakash 👷 Maker's and AI Corner: The ArmSoM CM5 is a compact powerhouse built around the RK3576 SoC. ArmSoM CM5: Powerful Replacement for Raspberry Pi CM4. ArmSoM's Compute Module 5 is an impactful Rockchip device with impressive hardware specs. And yes, it can support Raspberry Pi IO boards. It's FOSS | Abhishek Kumar. Also, learn to install DeepSeek locally on Linux. ✨ Apps Highlight: Bored of Chrome and Firefox? Maybe this browser can help: I tried this non-Chrome Open-Source Web Browser (And You Should Too!). It's time we got to try an open-source browser not based on Chrome. Zen Browser is the hero we didn't know we needed. It's FOSS News | Sourav Rudra 🛍️ Deal You Would Love: 15 Linux and DevOps books for just $18, plus your purchase supports the Code for America organization. Get them on Humble Bundle. Humble Tech Book Bundle: Linux from Beginner to Professional by O'Reilly. Learn Linux with ease using this library of coding and programming courses by O'Reilly. Pay what you want and support Code For America.
Humble Bundle 📽️ Video I am creating for you: Subscribe to the It's FOSS YouTube Channel. 🧩 Quiz Time: Ready to jog your memory of 90s video games? 90's Retro Rewind Memory Game: Know the classics? See them and memorize the titles to uncover them as a pair. It's FOSS | Ankush Das 💡 Quick Handy Tip: In Nautilus File Manager (GNOME Files), you can go up and down directories using Alt + Up/Down arrow keys. Similarly, you can use Alt + Left/Right arrow keys to go back and forward. 🤣 Meme of the Week: The funny little penguin is why some of us switched to Linux! 🐧 🗓️ Tech Trivia: Ken Thompson, co-creator of UNIX, was born on February 4, 1943. UNIX was developed at AT&T Bell Labs and took inspiration from Multics, the first multi-user, multitasking OS. It quickly gained popularity among engineers and scientists. 🧑‍🤝‍🧑 FOSSverse Corner: Pro FOSSer Jimmy shares an interesting case where a popular animation-focused YouTuber moved to Linux Mint years after being exploited by Adobe. Animator James Lee on dropping Adobe and switching to Linux: Hey everyone! A few weeks ago I saw this video by James Lee, and I forgot to post it here. If you aren't familiar, James Lee is an Australian animator who has been somewhat popular on YouTube. He's probably most famous for a collaboration he did with streamer Cr1TiKaL, but I have watched several of his videos before and he's very funny. Also, his animations have a very unique and striking art style that I really like. As a small warning, most of his videos have swear words. It personally does… It's FOSS Community | Akatama ❤️ With love: Share it with your Linux-using friends and encourage them to subscribe (hint: it's here). Share the articles in Linux subreddits and community forums. Follow us on Google News and stay updated in your News feed. Opt for It's FOSS Plus membership and support us 🙏 Enjoy FOSS 😄
  8. by: Geoff Graham Wed, 05 Feb 2025 14:58:18 +0000 You know about Baseline, right? And you may have heard that the Chrome team made a web component for it. Here it is! Of course, we could simply drop the HTML component into the page. But I never know where we’re going to use something like this. The Almanac, obs. But I’m sure there are times where embedding it in other pages and posts makes sense. That’s exactly what WordPress blocks are good for. We can take an already reusable component and make it repeatable when working in the WordPress editor. So that’s what I did! That component you see up there is the <baseline-status> web component formatted as a WordPress block. Let’s drop another one in just for kicks. Pretty neat! I saw that Pawel Grzybek made an equivalent for Hugo. There’s an Astro equivalent, too. Because I’m fairly green with WordPress block development, I thought I’d write a bit up on how it’s put together. There are still rough edges that I’d like to smooth out later, but this is a good enough point to share the basic idea. Scaffolding the project: I used the @wordpress/create-block package to bootstrap and initialize the project. All that means is I cd‘d into the /wp-content/plugins directory from the command line and ran the scaffolding command to plop it all in there. npx @wordpress/create-block The command prompts you through the setup process to name the project and all that. The baseline-status.php file is where the plugin is registered. And yes, it looks much the same as it has for years, just not in a style.css file like it is for themes. The difference is that the create-block package does some lifting to register the widget so I don’t have to: <?php /** * Plugin Name: Baseline Status * Plugin URI: https://css-tricks.com * Description: Displays current Baseline availability for web platform features. * Requires at least: 6.6 * Requires PHP: 7.2 * Version: 0.1.0 * Author: geoffgraham * License: GPL-2.0-or-later * License URI: https://www.gnu.org/licenses/gpl-2.0.html * Text Domain: baseline-status * * @package CssTricks */ if ( ! defined( 'ABSPATH' ) ) { exit; // Exit if accessed directly. } function csstricks_baseline_status_block_init() { register_block_type( __DIR__ . '/build' ); } add_action( 'init', 'csstricks_baseline_status_block_init' ); ?> The real meat is in the src directory. The create-block package also did some filling of the blanks in the block.json file based on the onboarding process: { "$schema": "https://schemas.wp.org/trunk/block.json", "apiVersion": 2, "name": "css-tricks/baseline-status", "version": "0.1.0", "title": "Baseline Status", "category": "widgets", "icon": "chart-pie", "description": "Displays current Baseline availability for web platform features.", "example": {}, "supports": { "html": false }, "textdomain": "baseline-status", "editorScript": "file:./index.js", "editorStyle": "file:./index.css", "style": "file:./style-index.css", "render": "file:./render.php", "viewScript": "file:./view.js" } Going off some tutorials published right here on CSS-Tricks, I knew that WordPress blocks render twice, once on the front end and once on the back end, and there’s a file for each one in the src folder: render.php handles the front-end view; edit.js handles the back-end view. The front-end and back-end markup: Cool.
I started with the <baseline-status> web component’s markup: <script src="https://cdn.jsdelivr.net/npm/baseline-status@1.0.8/baseline-status.min.js" type="module"></script> <baseline-status featureId="anchor-positioning"></baseline-status> I’d hate to inject that <script> every time the block pops up, so I decided to enqueue the file conditionally, based on the block being displayed on the page. This happens in the main baseline-status.php file, which I treated sort of the same way as a theme’s functions.php file. It’s just where helper functions go. // ... same code as before // Enqueue the minified script function csstricks_enqueue_block_assets() { wp_enqueue_script( 'baseline-status-widget-script', 'https://cdn.jsdelivr.net/npm/baseline-status@1.0.4/baseline-status.min.js', array(), '1.0.4', true ); } add_action( 'enqueue_block_assets', 'csstricks_enqueue_block_assets' ); // Adds the 'type="module"' attribute to the script function csstricks_add_type_attribute($tag, $handle, $src) { if ( 'baseline-status-widget-script' === $handle ) { $tag = '<script type="module" src="' . esc_url( $src ) . '"></script>'; } return $tag; } add_filter( 'script_loader_tag', 'csstricks_add_type_attribute', 10, 3 ); // Enqueues the scripts and styles for the back end function csstricks_enqueue_block_editor_assets() { // Enqueues the scripts wp_enqueue_script( 'baseline-status-widget-block', plugins_url( 'block.js', __FILE__ ), array( 'wp-blocks', 'wp-element', 'wp-editor' ), false, ); // Enqueues the styles wp_enqueue_style( 'baseline-status-widget-block-editor', plugins_url( 'style.css', __FILE__ ), array( 'wp-edit-blocks' ), false, ); } add_action( 'enqueue_block_editor_assets', 'csstricks_enqueue_block_editor_assets' ); The final result bakes the script directly into the plugin so that it adheres to the WordPress Plugin Directory guidelines. If that weren’t the case, I’d probably keep the hosted script intact because I’m completely uninterested in maintaining it. Oh, and that csstricks_add_type_attribute() function is there to help import the file as an ES module. There’s a wp_enqueue_script_module() action available to hook into that should handle that, but I couldn’t get it to do the trick. With that in hand, I can put the component’s markup into a template. The render.php file is where all the front-end goodness resides, so that’s where I dropped the markup: <baseline-status <?php echo get_block_wrapper_attributes(); ?> featureId="[FEATURE]"> </baseline-status> That get_block_wrapper_attributes() function is recommended by the WordPress docs as a way to output all of a block’s information for debugging things, such as which features it ought to support. [FEATURE] is a placeholder that will eventually tell the component which web platform feature to render information about. We may as well work on that now; the attribute itself gets registered in block.json a little later. First, we can update the markup in render.php to echo the featureID once it’s been established: <baseline-status <?php echo get_block_wrapper_attributes(); ?> featureId="<?php echo esc_html( $featureID ); ?>"> </baseline-status> There will be more edits to that markup a little later. But first, I need to put the markup in the edit.js file so that the component renders in the WordPress editor when adding it to the page.
<baseline-status { ...useBlockProps() } featureId={ featureID }></baseline-status> useBlockProps is the JavaScript equivalent of get_block_wrapper_attributes() and can be good for debugging on the back end. At this point, the block is fully rendered on the page when dropped in! The problems are: It’s not passing in the feature I want to display. It’s not editable. I’ll work on the latter first. That way, I can simply plug the right variable in there once everything’s been hooked up. Block settings: One of the nicer aspects of WordPress DX is that we have direct access to the same controls that WordPress uses for its own blocks. We import them and extend them where needed. I started by importing the stuff in edit.js: import { InspectorControls, useBlockProps } from '@wordpress/block-editor'; import { PanelBody, TextControl } from '@wordpress/components'; import { __ } from '@wordpress/i18n'; import './editor.scss'; This gives me a few handy things: InspectorControls are good for debugging. useBlockProps are what can be debugged. PanelBody is the main wrapper for the block settings. TextControl is the field I want to pass into the markup where [FEATURE] currently is. editor.scss provides styles for the controls. (The __ import handles translatable strings.) Before I get to the controls, there’s an Edit function needed as a wrapper for all the work: export default function Edit( { attributes, setAttributes } ) { // Controls } First come InspectorControls and the PanelBody: export default function Edit( { attributes, setAttributes } ) { /* React components need a parent element */ return ( <> <InspectorControls> <PanelBody title={ __( 'Settings', 'baseline-status' ) }> { /* Controls */ } </PanelBody> </InspectorControls> </> ); } Then it’s time for the actual text input control. I really had to lean on this introductory tutorial on block development for the following code, notably this section. export default function Edit( { attributes, setAttributes } ) { return ( <> <InspectorControls> <PanelBody title={ __( 'Settings', 'baseline-status' ) }> <TextControl label={ __( 'Feature', /* Input label */ 'baseline-status' ) } value={ featureID || '' } onChange={ ( value ) => setAttributes( { featureID: value } ) } /> </PanelBody> </InspectorControls> </> ); } Tie it all together: At this point, I have: The front-end view. The back-end view. Block settings with a text input. All the logic for handling state. Oh yeah! Can’t forget to define the featureID variable, because that’s what populates the component’s markup. Back in edit.js: const { featureID } = attributes; In short: the feature’s ID is what constitutes the block’s attributes. Now I need to register that attribute so the block recognizes it. Back in block.json in a new section: "attributes": { "featureID": { "type": "string" } }, Pretty straightforward, I think. Just a single text field that’s a string. It’s at this time that I can finally wire it up to the front-end markup in render.php: <baseline-status <?php echo get_block_wrapper_attributes(); ?> featureId="<?php echo esc_html( $featureID ); ?>"> </baseline-status> Styling the component: I struggled with this more than I care to admit. I’ve dabbled with styling the Shadow DOM, but only academically, so to speak. This is the first time I’ve attempted to style a web component with Shadow DOM parts on something being used in production. If you’re new to Shadow DOM, the basic idea is that it prevents styles and scripts from “leaking” in or out of the component. This is a big selling point of web components because it’s so darn easy to drop them into any project and have them “just” work.
But how do you style a third-party web component? It depends on how the developer sets things up, because there are ways to allow styles to “pierce” through the Shadow DOM. Ollie Williams wrote “Styling in the Shadow DOM With CSS Shadow Parts” for us a while back, and it was super helpful in pointing me in the right direction. Chris has one, too. A few other articles I used: “Options for styling web components” (Nolan Lawson, super well done!) “Styling web components” (Chris Ferdinandi) “Styling” (webcomponents.guide) First off, I knew I could select the <baseline-status> element directly without any classes, IDs, or other attributes: baseline-status { /* Styles! */ } I peeked at the script’s source code to see what I was working with. I had a few light styles I could use right away on the type selector: baseline-status { background: #000; border: solid 5px #f8a100; border-radius: 8px; color: #fff; display: block; margin-block-end: 1.5em; padding: .5em; } I noticed a CSS color variable in the source code that I could use in place of hard-coded values, so I redefined them and set them where needed: baseline-status { --color-text: #fff; --color-outline: var(--orange); border: solid 5px var(--color-outline); border-radius: 8px; color: var(--color-text); display: block; margin-block-end: var(--gap); padding: calc(var(--gap) / 4); } Now for a tricky part. The component’s markup looks close to this in the DOM when fully rendered: <baseline-status class="wp-block-css-tricks-baseline-status" featureid="anchor-positioning"> <h1>Anchor positioning</h1> <details> <summary aria-label="Baseline: Limited availability. Supported in Chrome: yes. Supported in Edge: yes. Supported in Firefox: no. Supported in Safari: no."> <baseline-icon aria-hidden="true" support="limited"></baseline-icon> <div class="baseline-status-title" aria-hidden="true"> <div>Limited availability</div> <div class="baseline-status-browsers"> <!-- Browser icons --> </div> </div> </summary> <p>This feature is not Baseline because it does not work in some of the most widely-used browsers.</p> <p><a href="https://github.com/web-platform-dx/web-features/blob/main/features/anchor-positioning.yml">Learn more</a></p> </details> </baseline-status> I wanted to play with the idea of hiding the <h1> element in some contexts but thought twice about it, because not displaying the title only really works for Almanac content when you’re on the page for the same feature as what’s rendered in the component. In any other context, the heading is a “need” for providing context as far as what feature we’re looking at. Maybe that can be a future enhancement where the heading can be toggled on and off. Voilà! Get the plugin: This is freely available in the WordPress Plugin Directory as of today! This is the very first plugin I’ve submitted to WordPress on my own behalf, so this is really exciting for me! Get the plugin Future improvements: This is far from fully baked but definitely gets the job done for now. In the future, it’d be nice if this thing could do a few more things: Live update: The widget does not update on the back end until the page refreshes. I’d love to see the final rendering before hitting Publish on something. I got it to where typing into the text input is instantly reflected on the back end; it’s just that the component doesn’t re-render to show the update. Variations: As in “large” and “small”.
Heading: Toggle to hide or show, depending on where the block is used. Baseline Status in a WordPress Block originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  9. by: Abhishek Prakash One of the issues I encountered after dual booting Windows with Linux is a missing Windows entry in the grub menu. Here's the scenario. Windows was present on the computer. I installed CachyOS alongside Windows and chose to install the Grub bootloader, which allows booting into Linux, Windows (and any other OS present on the system), along with the option to access UEFI settings. Only this time, Grub did not show Windows in the menu 😔 That was disappointing but not surprising, because I am aware that this is a feature. Let me show you how you can fix this by enabling the os-prober feature in Grub and then updating it. Step 1: Enable os-prober in grub. The Grub config file is located at /etc/default/grub. If you open it via Nano or some other editor, you'll see at the end of this file that os-prober is disabled by default, citing security reasons. If you are familiar with any terminal-based text editor, use it to uncomment the line # GRUB_DISABLE_OS_PROBER=false by removing the # at the beginning of the line. However, if you are absolutely new to the command line, you can use this command in the terminal instead: echo "GRUB_DISABLE_OS_PROBER=false" | sudo tee -a /etc/default/grub It will ask for your password. It should be the same account password you use to log in to the system. 🚧 When you type a password in the Linux terminal, nothing is reflected on the screen. It feels as if your system has hung, as there is no visible feedback. Don't worry. It's a security feature, and most Linux terminals won't even show asterisks (*) as you enter the password. Just type it in and press enter. With os-prober enabled, Grub will look for the presence of other operating systems in the EFI folder and will add them to the bootloader menu. There is one little problem: the config change won't take effect unless you update grub.
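Before moving on, you can optionally confirm that the line actually made it into the config file; a quick check with standard grep (the commented-out copy of the line may appear alongside it):

grep GRUB_DISABLE_OS_PROBER /etc/default/grub
# expected: GRUB_DISABLE_OS_PROBER=false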
💬 I hope this little trick helps you get a better dual booting experience. Let me know in the comments if you were able to get Windows back in Grub.
  12. by: Zainab Sutarwala Wed, 05 Feb 2025 11:24:00 +0000 Programmers spend a huge amount of their time on the computer, and these long hours of mouse usage can lead to Repetitive Strain Injuries. Using a standard mouse can aggravate these injuries. A mouse that keeps your palm in a neutral position is the best way of alleviating such problems: enter the trackball and vertical mice. With several options available in the market right now, a programmer can get a little confused trying to find the best mouse for their needs. Nothing to worry about, as this post will help you out. Also Read: 8 Best Keyboards for Programming in India

Best Mouse for Programming in India
Unsurprisingly, a big mouse with an ergonomic shape that fits perfectly in your hand works great for most functions. Some users also recommend a rubber grip close to the thumb and around the perimeter of the mouse, which makes it much less slippery. Here are some of the best mice for programmers to look at.

1. Logitech MX Master 3
The Logitech MX Master 3 wireless mouse is one of the best options for a professional programmer and is highly versatile for daily use. It has an ergonomic design and is comfortable for long hours of work because of its rounded shape and thumb rest. It suits a palm grip well, though people with smaller hands may have a little trouble gripping it comfortably. The MX Master 3 is a well-built, heavy mouse, giving it a hefty feel. It is wireless, and while its latency will not be noticeable for most people, it is not recommended for hardcore gamers. On the positive side, it provides two scroll wheels and gesture commands that make the control scheme diverse. You can set your preferred settings depending on which program and app you are using. Pros: Comfy sculpting. The electromagnetic scroll wheel gives precise and freewheeling motion. Works across 3 devices and between OSes. Amazing battery life. Cons: Connectivity suffers when connected to several devices through the wireless adapter.

2. Zebronics Zeb-Transformer-M
The Zeb-Transformer-M is a premium optical mouse that comes with six buttons. It has a high-precision sensor with a dedicated DPI switch that toggles between 1000, 1600, 2400, and 3200 DPI. The mouse has seven breathable LED modes, a strong 1.8-meter cable, and a quality gold-plated USB connector. It is available in black and has an ergonomic design with a solid structure as well as superior-quality buttons. Pros: Seven colors that can be selected as per your need. Compact shape and ergonomic design. Top-notch buttons and good gaming performance. Cons: Keys aren’t tactile. Packaging isn’t good.

3. Redgear A-15 Wired Gaming Mouse
The Redgear A-15 wired mouse provides maximum personalization through its software. The mouse makes it very simple to control the DPI and RGB preferences according to your game or gaming setup. At first glance, the Redgear A-15 looks like a real gaming mouse. The all-black design and RGB lighting are quite appealing and give off a gamer vibe. Its build quality is amazing, with a high-grade exterior plastic casing that feels fantastic. The sides of the mouse are covered with textured rubber, ensuring a perfect grip when taking a headshot.
There are two programmable buttons on the left side, simple to reach with your thumb. The mouse has the best overall build quality and provides amazing comfort. Pros: 16.8 million colour customization options. DPI settings. High-grade plastic. 2 programmable buttons. Cons: Poor-quality connection wire.

4. Logitech G402 Hyperion Fury
The Logitech G402 Hyperion Fury is the best wired mouse made especially for FPS games. It is well built, with an ergonomic shape well suited for a right-handed palm or claw grip. It has a good number of programmable buttons, including a dedicated sniper button on its side. Besides the usual left and right buttons and scroll wheel, this mouse boasts the sniper button (with one-touch DPI), DPI up and down buttons (that cycle between 4 DPI presets), and 2 programmable thumb buttons. The mouse delivers great performance, with low click latency and a high polling rate, offering a responsive and smooth gaming experience. Sadly, it is a bit heavy, and the rubber-coated cable feels a little stiff. Also, its scroll wheel is basic and does not allow left and right tilt input or infinite scrolling. Pros: Customizable software. Good ergonomics. On-the-fly DPI switch. Amazing button positioning. Cons: A bit expensive. No discrete DPI axis. The scroll wheel is not very solid. Not perfect for non-FPS titles.

5. Lenovo Legion M200
The Lenovo Legion M200 RGB is a wired gaming mouse from Lenovo. With its comfortable design, the mouse offers amazing functionality and performance at a very good price. The Legion M200 is made for amateur and beginner PC gamers. With its comfortable ambidextrous shape, it is quite affordable but provides uncompromising features and performance. It comes with a 5-button design and a 2400 DPI sensor with a four-level DPI switch, 7 backlight colors, and a braided cable. It’s simple to use and set up, without extra complicated software. It offers an adjustable four-level DPI setting, 30-inch-per-second movement speed, a 500 fps frame rate, and a seven-color circulating backlight. Pros: RGB lights are great. Ergonomic design. Cable quality is amazing. Build quality is perfect. 1-year warranty. Grip is perfect. Cons: No customization options. A bit bulky.

6. HP 150 Truly Ambidextrous
Straddling the gaming and productivity worlds is something the HP 150 Truly Ambidextrous Wireless Mouse does great. It is one of the most comfortable, satisfying, and luxurious mice, with smart leatherette sides that elevate the experience. It has an elegant ergonomic design that gives you complete comfort while using it for long hours. It feels so natural that you will forget you are holding a mouse. The mouse has 3 buttons: right-click, left-click, and center click on the dual-function wheel. You get all the control you want at your fingertips. With a 1600 DPI sensor, the mouse works great on almost any surface with great accuracy. Just stash this mouse with your laptop in your bag and you are set to go. Pros: Looks great. Comfortable. Great click action. Cons: Wireless charging. The scroll wheel feels a bit light.

7. Lenovo 530
The Lenovo 530 Wireless Mouse is perfect for controlling your PC easily at any time and any place. It provides cordless convenience with a 2.4 GHz nano-receiver, which means you can scroll seamlessly without any clutter in your workspace. It has a contoured design and a soft-touch finish for complete comfort.
It is a plug-and-play device; with its 2.4 GHz wireless connection through a nano USB receiver, it offers cordless convenience. You get all-day comfort and maximum ergonomics with the contoured, unique design and the soft, durable finish. The Lenovo 530 is simply a perfect mouse of choice. Pros: Best travel mouse for work or for greater control. Available in five different color variants. Long 12-month battery life. Can work with laptops from other brands. Cons: The dual-tone finish makes it look a bit cheap.

8. Dell MS116
If you are looking for an affordable, well-featured mouse for daily work that is comfortable to use, get the Dell MS116. It is just the mouse you are searching for. The Dell mouse has optical LED tracking and supports a decent 1000 DPI, making it perform with reasonable speed and accuracy. It is a wired mouse, so it works smoothly without any batteries. The Dell MS116 comes in a black matte finish. The mouse is compatible with pretty much any device, no matter whether it runs macOS, Windows, or Linux. It is also quite affordable. There aren’t many products in this price range that give you such good accuracy and performance as the Dell MS116 optical mouse does. Pros: The grip of this mouse is highly comfortable for long working sessions. Available at an affordable rate. The durability of the product is great. Cons: No Bluetooth. Not made for playing games. The warranty period of the mouse is short.

Final Words
Ideally, the most convenient and handy mouse for programmers will be one with the best comfort and a design that fits your hand and grip style perfectly. A flawless device won’t just leave you less worn out; it also poses fewer threats to your overall health in the future. Among the different mouse styles with mild improvements, you will certainly find quite fascinating ones, like the upright (vertical) controller or the trackball. Despite how unusual they appear, these layouts have verified benefits and seem to be easy to get used to. These are some of the top-rated mice for programmers. The post 8 Best Mouse for Programming in India 2025 appeared first on The Crazy Programmer.
  13. by: Neeraj Mishra Wed, 05 Feb 2025 10:05:00 +0000 Python has some of the most frequently used frameworks, chosen for their simplicity of development and minimal learning curve. According to the Stack Overflow 2020 poll, 66 percent of programmers use the two most popular Python web frameworks, Django and Flask, and want to continue using them. In addition, according to the study, Django is the fourth most wanted web framework. If you are looking to hire Python programmers, you should know that Python frameworks are in high demand, since Python is open-source software used and produced by software developers worldwide. It is easily customizable and adheres to industry standards. Python has emerged as an excellent platform and a popular choice among developers, and with the addition of numerous frameworks, it is quickly becoming a market leader. Python is also gaining popularity due to significant qualities such as functionality, originality, and the general curiosity it attracts. And we are all aware of the fact that frameworks aid the development cycle by making it much easier for developers to work on a project.

Top 5 Python Frameworks

1. Django
Django is the most popular comprehensive (full-stack) Python framework, ranking in the top ten web frameworks of 2020. It is freely available software with a plethora of features that make website building much easier. It comprises URL routing, database configuration management and schema migrations, authentication, web server integration, a template engine, and an object-relational mapper (ORM). It helps in the creation and delivery of highly scalable, fast, and resilient online applications. It has extensive documentation and community support, and it includes pre-built packages as well as libraries that are ready to use.

2. Pyramid
Pyramid is another comprehensive Python framework, whose main objective is to make creating programs of any complexity straightforward. It offers significant testing support as well as versatile development tools. Pyramid’s function decorators make it simple to handle Ajax requests. It employs a versatile approach to authorization and authentication. With simple URL generation, templating, predicates, and asset specifications, it makes web application development and deployment much more enjoyable, reliable, and efficient. Quality measurement, infrastructure security, structuring, extensive documentation, and HTML structure generation are also included in Pyramid.

3. Flask
Flask is often regarded as the greatest Python microframework because it is extremely lightweight, flexible, and adaptable. It is open-source and works with Google App Engine. It has enhanced and secure support for cookies for building client-side sessions, and it makes recommendations rather than imposing requirements. Flask has an integrated development server, HTTP request handling, and Jinja2 templating. It has a simple architecture and supports unit testing. Some of the other significant characteristics of Flask include a configurable app structure for file storage and built-in debugging; it is quick, uses Unicode, and is completely WSGI compliant.
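To show just how “micro” Flask is, here is a minimal sketch of a complete Flask application (the file name app.py is a placeholder, and Flask is assumed to be installed, e.g. via pip install flask):

from flask import Flask

# Create the application instance
app = Flask(__name__)

# Map the root URL to a view function
@app.route("/")
def hello():
    return "Hello, World!"

if __name__ == "__main__":
    # Start Flask's built-in development server
    app.run(debug=True)

Run it with python app.py and Flask’s development server answers at http://127.0.0.1:5000/; everything else (databases, forms, authentication) is opt-in through extensions.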
4. Web2Py
Web2Py is another free and open-source, comprehensive Python-based web framework that is mostly used for the rapid creation of highly efficient, secure, and modular database-driven internet applications. With the help of a web browser, a SQL database, and a web-based frontend, this framework conveniently manages the Python application development process. Clients can also use web browsers to develop, organise, administer, and release web-based applications. If something goes wrong, it features an in-built ticketing mechanism for error reporting. Web2Py is highly customizable, fully backward compatible, and has a large community of supporters.

5. CubicWeb
CubicWeb is an LGPL-licensed semantic Python web framework. This framework aids developers in the construction of web apps by reusing components called cubes. It adheres to an object-oriented structure, which makes the software more efficient and easier to understand and analyze. It facilitates data-related queries by incorporating RQL (Relational Query Language), which offers a concise syntax for querying relationships, managing datasets, and viewing attributes and relations. Other qualities include semi-automatic mechanisms for HTML/CSS generation, security procedures, Semantic Web support, administrative databases, and reliable storage backups.

Conclusion
Python is seeing an unexpected rising trend, and there is no hint of a slowdown soon. Python is likely to exceed Java and C# in popularity in the next few years, which indicates that there is much more to come. Many of today’s leading technology businesses, like Google, Netflix, and Instagram, use Python frameworks for web development. Because Python lacks the built-in functionality necessary to expedite bespoke web application development, many programmers rely on Python’s extensive selection of frameworks to cope with implementation details. Rather than writing the same code for each project, Python developers can use a framework’s ready-made modules. When selecting a framework for any development, take into consideration the features and functions that it offers. Your requirements, and a framework’s capacity to meet those requirements, will decide the success of your project. In this article, we have covered the essential aspects of the top 5 Python frameworks, which should help you determine which of them fits your next web development project. The post Top 5 Python Frameworks in 2025 appeared first on The Crazy Programmer.
  14. By: Janus Atienza Tue, 04 Feb 2025 15:52:14 +0000 If you’ve ever thought about making games but assumed Linux wasn’t the right platform for it, think again! While Windows and macOS might dominate the game development scene, Linux has quietly built up an impressive toolkit for developers. Whether you’re an indie creator looking for open-source flexibility or a studio considering Linux support, the ecosystem has come a long way. From powerful game engines to robust development tools, Linux offers everything you need to build and test games. In this article, we’ll break down why Linux is worth considering, the best tools available, and how you can get started.

Why Choose Linux for Game Development?
If you’re wondering why anyone would develop games on Linux instead of Windows or macOS, the answer is simple: freedom, flexibility, and performance. First off, Linux is open-source, which means you aren’t locked into a specific ecosystem. You can customize your entire development environment, from the desktop interface to the compiler settings. No forced updates, no bloated background processes eating up resources — just an efficient workspace built exactly how you like it. Then there’s the stability and performance factor. Unlike Windows, which can sometimes feel sluggish with unnecessary background tasks, Linux runs lean. This is especially useful when you’re working with heavy game engines or compiling large projects. It’s why so many servers and supercomputers use Linux — it just works. Another big plus? Cost savings. Everything you need — IDEs, compilers, game engines, and creative tools — can be found for free. Instead of shelling out for expensive software licenses, you can reinvest that money into your project. And let’s not forget about growing industry support. Unity, Unreal Engine, and Godot all support Linux, and with platforms like Steam Deck running Linux-based SteamOS, game development for Linux is more relevant than ever. Sure, it’s not as mainstream as Windows, but if you’re looking for a powerful, flexible, and budget-friendly development setup, Linux is definitely worth considering.

Best Game Engines for Linux
If you’re developing games on Linux, you’ll be happy to know that several powerful game engines fully support it. Here are some of the best options:

1. Unity – The Industry Standard
Unity is one of the most popular game engines out there, and yes, it supports Linux. The Unity Editor runs on Linux, though it’s still considered to be in “preview” mode. However, many game development companies like RetroStyle Games successfully use it for 2D and 3D game development. Plus, you can build games for multiple platforms, including Windows, macOS, mobile, and even consoles — all from Linux.

2. Unreal Engine – AAA-Quality Development
If you’re aiming for high-end graphics, Unreal Engine is a great choice. It officially supports Linux, and while the Linux version of the editor might not be as polished as the Windows one, it still gets the job done. Unreal’s powerful rendering and blueprint system make it a top pick for ambitious projects.

3. Godot – The Open-Source Powerhouse
If you love open-source software, Godot is a dream come true. It’s completely free, lightweight, and optimized for Linux. The engine supports both 2D and 3D game development and has its own scripting language (GDScript) that’s easy to learn. Plus, since Godot itself is open-source, you can tweak the engine however you like.

4. Other Notable Mentions
Defold – A lightweight engine with strong 2D capabilities.
Love2D – Perfect for simple 2D games using Lua scripting. Stride – A promising C#-based open-source engine.

Essential Tools for Linux Game Development
Once you’ve picked your game engine, you’ll need the right tools to bring your game to life. Luckily, Linux has everything you need, from coding and design to audio and version control.

1. Code Editors & IDEs
If you’re writing code, you need a solid editor. VS Code is a favorite among game developers, with great support for C#, Python, and other languages. If you prefer something more powerful, JetBrains Rider is a top-tier choice for Unity developers. For those who like minimalism, Vim or Neovim can be customized to perfection.

2. Graphics & Animation Tools
Linux has some fantastic tools for art and animation. Blender is the go-to for 3D modeling and animation, while Krita and GIMP are excellent for 2D art and textures. If you’re working with pixel art, Aseprite (open-source version) is a fantastic option.

3. Audio Tools
For sound effects and music, LMMS (like FL Studio but free) and Ardour (a powerful DAW) are solid choices. If you just need basic sound editing, Audacity is a lightweight but effective tool.

4. Version Control
You don’t want to lose hours of work due to a crash. That’s where Git comes in. You can use GitHub, GitLab, or Bitbucket to store your project, collaborate with teammates, and roll back to previous versions when needed. With these tools, you’ll have everything you need to code, design, animate, and refine your game — all within Linux. And the best part? Most of them are free and open-source!

Setting Up a Linux Development Environment
Getting your Linux system ready for game development isn’t as complicated as it sounds. In fact, once you’ve set it up, you’ll have a lightweight, stable, and efficient workspace that’s perfect for coding, designing, and testing your game.

First step: Pick the right Linux distro. Not all Linux distributions (distros) are built the same, so choosing the right one can save you a lot of headaches. If you want ease of use, go with Ubuntu or Pop!_OS — both have great driver support and a massive community for troubleshooting. If you prefer cutting-edge software, Manjaro or Fedora are solid picks.

Second step: Install essential libraries and dependencies. Depending on your game engine, you may need to install extra libraries. For example, if you’re using Unity, you’ll want Mono and the .NET SDK. Unreal Engine requires Clang and some development packages. Most of these can be installed easily via the package manager:

sudo apt install build-essential git cmake

For Arch-based distros, you’d use:

sudo pacman -S base-devel git cmake

Third step: Set up your game engine. Most popular engines work on Linux, but the setup varies: Unity: Download the Unity Hub (Linux version) and install the editor. Unreal Engine: Requires compiling from source via GitHub. Godot: Just download the binary, and you’re ready to go.

Fourth step: Configure development tools. Install VS Code or JetBrains Rider for coding. Get Blender, Krita, or GIMP for custom 3D game art solutions. Set up Git for version control.

Building & Testing Games on Linux
Once you’ve got your game up and running in the engine, it’s time to build and test it. The good news? Linux makes this process smooth — especially if you’re targeting multiple platforms.
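If you’re rolling a custom C++ engine rather than using one of the engines above, the build commands in the next section assume a small CMake project. A minimal sketch (the project name yourgame and src/main.cpp are placeholders, not taken from any specific engine):

# CMakeLists.txt: minimal build script for a hypothetical C++ game
cmake_minimum_required(VERSION 3.16)
project(yourgame LANGUAGES CXX)

# One executable built from the game's entry point
add_executable(yourgame src/main.cpp)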
1. Compiling Your Game
Most game engines handle the build process automatically, but if you’re using a custom engine or working with compiled languages like C++, you’ll need a good build system. CMake and Make are commonly used for managing builds, while GCC and Clang are solid compilers for performance-heavy games. To compile, you’d typically run:

cmake .
make
./yourgame

If you’re working with Unity or Unreal, the built-in export tools will package your game for Linux, Windows, and more.

2. Performance Optimization
Linux is great for debugging because it doesn’t have as many background processes eating up resources. To monitor performance, you can use: htop – for checking CPU and memory usage. glxinfo | grep "OpenGL version" – to verify your GPU drivers. Vulkan tools – if your game uses Vulkan for rendering.

3. Testing Across Different Hardware & Distros
Not all Linux systems are the same, so it’s a good idea to test your game on multiple distros. Tools like Flatpak and AppImage help create portable builds that work across different Linux versions. If you’re planning to distribute on Steam, its Proton compatibility layer can help you test how well your game runs.

Challenges & Limitations
While Linux is a great platform for game development, it isn’t without its challenges. If you’re coming from Windows or macOS, you might run into a few roadblocks — but nothing that can’t be worked around. Some industry-standard tools, like Adobe Photoshop, Autodesk Maya, and certain middleware, don’t have native Linux versions. Luckily, there are solid alternatives like GIMP, Krita, and Blender, but if you absolutely need a Windows-only tool, Wine or a virtual machine might be your best bet. While Linux has come a long way with hardware support, GPU drivers can still be tricky. NVIDIA’s proprietary drivers work well but sometimes require extra setup, while AMD’s open-source drivers are generally more stable but may lag in some optimizations. If you’re using Vulkan, make sure your drivers are up to date for the best performance. Linux gaming has grown, especially with Steam Deck and Proton, but it’s still a niche market. If you’re planning to sell a game, Windows and consoles should be your priority — Linux can be a nice bonus, but not the main target unless you’re making something for the open-source community. Despite these challenges, many developers like RetroStyle Games successfully create games on Linux. The key is finding the right workflow and tools that work for you. And with the growing support from game engines and platforms, Linux game development is only getting better!

Conclusion
So, is Linux a good choice for game development? Absolutely — but with some caveats. If you value customization, performance, and open-source tools, Linux gives you everything you need to build amazing games. Plus, with engines like Unity, Unreal, and Godot supporting Linux, developing on this platform is more viable than ever. That said, it isn’t all smooth sailing. You might have to tweak drivers, find alternatives to proprietary software, and troubleshoot compatibility issues. But if you’re willing to put in the effort, Linux rewards you with a fast, stable, and distraction-free development environment. At the end of the day, whether Linux is right for you depends on your workflow and project needs. If you’re curious, why not set up a test environment and give it a shot? You might be surprised at how much you like it! The post Game Development on Linux appeared first on Unixmen.
  15. by: Chirag Manghnani Tue, 04 Feb 2025 11:51:00 +0000 Wondering how much a Software Engineer’s salary is in India? You’re at the right stop, and we’ll help you get a clear picture of the demand for Software Engineers in India. Besides, you’ll also learn how much a Software Engineer makes on average based on location and work experience.

Who is a Software Engineer or Developer?
A software engineer plays an essential role in the field of software design and development. The engineer is usually the one who helps shape the way a software design team works, and may collaborate with the designer to integrate a program’s various roles into a single structure. In addition, the engineer interacts with programmers and coders to map out different programming activities and smaller functions, which are merged into larger, working programs and into new features for existing applications.

Who can be a Software Engineer?
An individual must typically have a Bachelor’s in information technology, computer science, or a similar field to work as a software engineer. Many organizations choose applicants for this position who can demonstrate practical programming and coding expertise. Generally, to become a Software Engineer, one needs the following:
A bachelor’s degree in Computer Engineering/Computer Science/Information Technology
Understanding of programming languages such as Java or Python
Familiarity with high school mathematics

Skills required to become an expert Software Engineer are:
Python
Java
C++
Databases such as Oracle and MySQL
Basic networking concepts

These skills will help an individual grow their career in the field of Software Engineering. However, one should also be familiar with the following skills to excel and stay updated in the field:
Object-oriented Design (OOD)
Debugging a program
Testing software
Coding in modern languages such as Ruby, R, and Go
Android development
HTML, CSS, and JavaScript
Artificial Intelligence (AI)

Software Engineer Salary in India
According to the latest statistics from Indeed, a Software Engineer gets an average annual pay of INR 4,38,090 in India. The salary figures are based on 5,031 salaries privately submitted to Indeed by software engineer employees and users, and on past and current job ads, over the past 36 months. A software engineer’s average tenure is shorter than one year. Besides, as per the Payscale report, the average Software Engineer salary in India is around INR 5,23,770. So, we can conclude on this basis that the average salary of a Software Engineer lies somewhere between INR 4 and 6 lacs.

Software Engineer Salary in India based on Experience
A software engineer at entry level with less than one year of experience can expect an average total compensation of INR 4 lacs (including tips, incentives, and overtime pay), based on 2,377 salary reports. A candidate with 1-4 years of experience as a software developer receives an annual salary of INR 5-6 lacs, based on approximately 15,000 salaries reported across different locations in India. A mid-career software engineer with 5-9 years of experience receives an overall payout of INR 88,492, based on 3,417 salaries. An advanced software engineer with 10 to 19 years’ experience will get an annual gross salary of INR 1,507,360. Software Engineers with over 20 years of service can earn a cumulative average of INR 8,813,892.
Check out the Payscale graph of Software Engineer salaries based on work experience in India.

Top Companies’ Salaries Offered to Software Engineers in India
Company – Annual Salary
Capgemini – INR 331k
Tech Mahindra Ltd – INR 382k
HCL Technologies – INR 388k
Infosys Ltd – INR 412k
Tata Consultancy Services Ltd – INR 431k
Accenture – INR 447k
Accenture Technology Solutions – INR 455k
Cisco Systems Inc. – INR 1M

The top companies offering jobs to Software Engineers in India are Tata Consultancy Services Ltd, HCL Technologies Ltd, and Tech Mahindra Ltd. The average salary at Cisco Systems Inc. is INR 1,246,679, the highest recorded. Accenture Technology Solutions and Accenture pay their Software Engineers about INR 454,933 and INR 446,511, respectively, and are other companies that provide high salaries for this position. At about INR 330,717, Capgemini pays the lowest. Tech Mahindra Ltd and HCL Technologies Ltd occupy the lower end of the scale, paying around INR 3,82,426 and INR 3,87,826.

Software Engineer Salary in India based on Popular Skills
Skill – Average Salary
Python – INR 6,24,306
Java – INR 5,69,020
JavaScript – INR 5,31,758
C# – INR 5,03,781
SQL – INR 4,95,382

Python, Java, and JavaScript skills are associated with a higher overall wage. On the other hand, C# and SQL skills pay less than the average rate.

Software Engineer Salary in India based on Location
Location – Average Salary
Bangalore – INR 5,74,834
Chennai – INR 4,51,541
Hyderabad – INR 5,14,290
Pune – INR 4,84,030
Mumbai – INR 4,66,004
Gurgaon – INR 6,53,951

Software Engineers in Gurgaon, Haryana earn an average of 26.6% more than the national average. Bangalore, Karnataka (16.6% more) and Hyderabad, Andhra Pradesh (3.6% more) also show higher-than-average incomes. The lowest salaries were found in Noida, Uttar Pradesh; Mumbai, Maharashtra; and Chennai, Tamil Nadu (between 4.7% and 6.6% below average).

Software Engineer Salary in India based on Roles
Role – Average Salary
Senior Software Engineer – INR 486k-2m
Software Developer – INR 211k-1m
Sr. Software Engineer / Developer / Programmer – INR 426k-2m
Team Leader, IT – INR 585k-2m
Information Technology (IT) Consultant – INR 389k-2m
Software Engineer / Developer / Programmer – INR 219k-1m
Web Developer – INR 123k-776k
Associate Software Engineer – INR 238k-1m
Lead Software Engineer – INR 770k-3m
Java Developer – INR 198k-1m

FAQs related to Software Engineering Jobs and Salaries in India
Q1. How much do Software Engineer employees make? The average salary for software engineering jobs ranges from 5 to 63 lacs, based on 6,619 profiles.
Q2. What is the highest salary offered to a Software Engineer? The highest reported salary offered to a Software Engineer is ₹145.8 lakhs per year. The top 10% of employees earn more than ₹26.9 lakhs per year, and the top 1% earn more than a whopping ₹63.0 lakhs per year.
Q3. What is the median salary offered to a Software Engineer? The median salary estimated so far from wage profiles is approximately 16.4 lacs per year.
Q4. What are the most common skills required of a Software Engineer? The most common skills required for Software Engineering are Python, Java, JavaScript, C#, C, and SQL.
Q5. What are the highest paying jobs as a Software Engineer?
The top 5 highest paying jobs as a Software Engineer, with reported salaries, are:
Senior SDE – ₹50.4 lakhs per year
SDE 3 – ₹37.5 lakhs per year
SDE III – ₹33.2 lakhs per year
Chief Software Engineer – ₹32.6 lakhs per year
Staff Software Test Engineer – ₹30.6 lakhs per year

Conclusion
In India, the pay of Software Engineers is among the biggest packages in the country. How much you earn will depend on your abilities, your background, and the city you live in. The Software Engineer salary in India depends on the many factors that we’ve listed in this post. Based on your expertise level and work experience, you can draw as high a CTC per annum in India as in other countries. Cheers to Software Engineers! The post Software Engineer Salary in India 2025 appeared first on The Crazy Programmer.
  16. by: Abhishek Kumar From the title, you might be thinking: yet another clickbait post. But I mean it when I say this: ArmSoM has truly delivered something special. ArmSoM, yet again, has sent us their Compute Module 5 (CM5) with its IO board for review. Last time, I tested and reviewed their AIM7 board, and my mind was blown by its sheer performance. With an RK3588 SoC, 8GB of RAM, and 32GB of storage, it was a beast. This time around, we’re looking at the CM5, powered by the RK3576, a slight step down from the RK3588 but still impressive. It comes with 8 GB of RAM (though a 16 GB version is available) and 64 GB of onboard eMMC storage. On paper, it’s shaping up to be a serious contender in the world of compute modules. In this review, I’ll walk you through its hardware specifications, software support, benchmarks, AI capabilities, and my personal thoughts. Let’s dive in!

CM5 Specifications
Source: ArmSoM
The ArmSoM Compute Module 5 is a compact powerhouse built around the RK3576 SoC, an octa-core processor that’s both fast and efficient. With support for up to 16GB of LPDDR5 RAM and up to 128GB of onboard eMMC storage, it offers twice the memory and storage options of the Raspberry Pi CM4. What makes it even better? It uses the same 100-pin connector as the CM4, making it compatible with Raspberry Pi IO boards. Plus, it supports 4K@120fps video output, giving you ultra-smooth visuals for high-resolution displays.

Specification – ArmSoM CM5
Processor – RK3576 SoC
CPU Architecture – Quad-core ARM Cortex-A72 & quad-core Cortex-A53
GPU – ARM Mali G52 MC3
Memory – Up to 16GB LPDDR5
Storage – eMMC storage (optional capacities)
Display Output – 1x HDMI 2.1, 1x DP
Video Resolution – Supports 4K@120fps
Network Interface – 1x Gigabit Ethernet port
USB Ports – 1x USB 3.0, 1x USB 2.0
GPIO – 40-pin GPIO
Expandability – 2x PCIe/SATA/USB 3.0 SS
Camera Interface – 1x 4-lane MIPI CSI, 1x 2-lane MIPI CSI
Display Interface – 1x 4-lane MIPI DSI
Power Input – 5V
Dimensions – 55mm x 40mm
Operating System Support – Debian, Android, Ubuntu, etc.

CM5-IO Board Specifications
Source: ArmSoM
The CM5-IO board is designed to make the most of the CM5 module. It features an HDMI output for 4K displays, four USB 3.0 ports for peripherals, and a Gigabit Ethernet port with PoE support. There’s also an SD card slot and an M.2 slot for adding fast storage or PCIe devices. With dual MIPI CSI camera interfaces and a 40-pin GPIO header, it’s perfect for projects that demand flexibility. It’s compact, functional, and pairs seamlessly with the CM5 module to deliver a complete development platform.

Specifications:
1x HDMI output
4x USB 3.0 Type-A
Gigabit Ethernet RJ45 with PoE support
Firmware flashing and device mode via USB Type-C
GPIO: 40-pin header
Power connector: DC barrel jack for 12V power input
Expansion: M.2 (M-key, supports PCIe), microSD
MIPI DSI: 1x 4-lane MIPI DSI, supports up to 4K@60fps (x4)
MIPI CSI0: 1x 4-lane MIPI CSI, each lane up to 2.5Gbps
MIPI CSI1: 1x 2-lane MIPI CSI, each lane up to 2.5Gbps
Others: HPOUT, FAN, VRTC
Dimensions: 100 x 80 x 29 mm (3.94 x 3.15 x 1.14 inches)

Unboxing and first impressions
The CM5 and its IO board arrived fully assembled, tucked neatly inside a sturdy, no-nonsense package. While the box wasn’t flashy, it did its job well; everything was secure and free of unnecessary fluff. Sorry for the potato-looking image quality. The first thing I noticed was the compactness of the CM5 module. It’s small, yet it feels solid in hand, like it means business.
Looking closely, you can immediately spot the essentials: the RK3576 SoC sitting at the heart of the module, flanked by the eMMC storage chip and LPDDR5 RAM. The layout is efficient and clean, with every component neatly placed. Even the tiny antenna connectors for Bluetooth and WiFi are exposed, ready to connect to external antennas for better wireless performance. Flipping it over, the 100-pin connector on the back stands out. The CM5 is designed to work seamlessly with Raspberry Pi IO boards, making it an excellent choice for anyone looking to upgrade their Pi-based projects. ArmSoM CM5 supports the Raspberry Pi IO board | Source: ArmSoM The IO board, which came paired with the module, is equally impressive. It’s larger than the CM5 itself but just as well built. Ports and connectors are thoughtfully arranged, from the HDMI output and USB 3.0 ports to the 40-pin GPIO header. And don’t forget that this IO board also has an M.2 slot, unlike the Raspberry Pi 500, which made the news for its unpopulated M.2 slot.

Setting it up
Getting started with the CM5 was refreshingly simple. The module slid perfectly into the IO board; just look for the markings on the board. And to my surprise, this time I didn’t have to rely on other sources, as ArmSoM has provided great documentation for setup along with links to all the OS images.

OS Installation & first boot
If you are coming from the Raspberry Pi ecosystem, you might find it difficult to flash OS images onto the CM5, but after my experience with the AIM7, it was easy for me. RKDevTool is required to flash an OS image on Rockchip devices. Flashing the Android 14 image to the CM5 using RKDevTool

Debian
The CM5 came pre-installed with ArmSoM’s custom Debian image, which saved me the hassle of flashing an OS right out of the box. When I powered it on, the board booted into Debian in under 30 seconds, thanks to the onboard eMMC storage. However, there was a small hiccup: the default locale was set to Chinese. While this threw me off for a moment, Google Translate came to the rescue. I’ve covered a detailed guide on how to change locales in Debian. Once the language barrier was out of the way, everything ran smoothly. The system felt responsive, and the ArmSoM image came with just the right balance of pre-installed utilities to get started without feeling bloated.

Android 14
ArmSoM doesn’t just stop at Debian; they also provide an Android 14 image for the CM5, and I couldn’t resist the idea of running Android on this tiny yet powerful board. Installing it was straightforward, though slightly different from the usual process. Instead of burning the image to an SD card or eMMC, you need to flash it as firmware using the RKDevTool utility. The process was smooth, and once the flashing was complete, I rebooted the system. I was greeted with the Android boot animation, and in no time, the familiar Android home screen appeared. Interestingly, the display was in portrait mode, which felt a bit odd on my monitor but didn’t hinder functionality. The Android image was barebones: just the essentials, nothing more. I scrolled through the settings, checked out the "About" section, and explored the file manager. It felt snappy and responsive, but that was about it. One noticeable omission was the absence of the Google Play Store. If you’re keen on having it, you can install it using the Open GApps Project. However, since I was pressed for time, I skipped that step and instead sideloaded Geekbench for Android from APKMirror to get straight to benchmarking.
Performance testing
Now comes the most awaited section: the benchmarks! It’s one thing to talk about specs on paper, but how does the CM5 actually perform in real-world tests? To keep things simple, here’s what I tested: Geekbench performance: evaluating CPU and overall system power. AI capabilities: testing the NPU for AI-related workloads. YouTube playback: checking video performance and hardware acceleration. 📋 The Geekbench test was conducted using the Geekbench Android app. For AI testing, I used the pre-installed Debian image. YouTube performance was tested in the Chromium browser inside Debian as well, with hardware acceleration enabled.

Geekbench results
The Geekbench results gave us a good glimpse of the CM5’s raw power. With a single-core score of 321 and a multi-core score of 1261, the CM5 delivers solid performance. The single-core score of 321 might seem modest, but it’s adequate for basic tasks like file compression (54.9 MB/sec) and lightweight navigation (2.34 routes/sec). If you’re planning to use the CM5 for simple applications, like hosting a lightweight server or running scripts, this score is sufficient. However, for tasks that demand high single-threaded performance, like intensive image processing or compiling large programs, you might notice some lag. The multi-core score of 1261 is where the RK3576 shines. This score reflects the strength of its eight cores working together, making it ideal for multitasking and workloads that can leverage parallel processing.

AI workload
The CM5’s 6 TOPS NPU is designed to handle AI inference efficiently, just like its big sibling, the AIM7. It supports RKNN-LLM, a toolkit that enables deploying lightweight language models on Rockchip hardware with optimized performance. Source: RKNN-LLM To test its capabilities, I ran the TinyLlama model with 1.1 billion parameters, and the results were consistent with the AIM7. The NPU achieved a throughput of 13-14 tokens per second, showcasing its ability to handle lightweight AI workloads with ease. With the NPU handling AI tasks, the GPU stays free for other workloads. This makes the CM5 ideal for edge AI projects where efficient resource use is key.

YouTube playback
YouTube playback is my favorite test for any SBC because it’s where many boards, including the Raspberry Pi (even the Pi 5), still stumble. Playing 1080p consistently is a challenge for most, and 4K? Forget about it. But the CM5 completely shattered my expectations. Running Chromium on Debian with hardware acceleration enabled, I tested videos at 1080p, 1440p, and 4K. The CM5 didn’t just handle it, it crushed it. Even at 4K resolution, the playback was smooth, with fewer than 10 dropped frames throughout the video. That’s right: 4K on an SBC, and it worked beautifully. What’s more impressive is how efficiently it handled the load. Thanks to hardware decoding, CPU usage stayed low, leaving the board cool and responsive. I even recorded a video of the CM5 playing a 4K YouTube video to showcase its capabilities. If you’re considering the CM5 for a media server or as a replacement for your Android TV box, this performance makes it an easy choice. It’s rare to see this level of multimedia smoothness on an SBC, and the CM5 delivers it effortlessly.

What about the Raspberry Pi CM5?
I don’t want to sugarcoat it: the Raspberry Pi CM5 outperforms the ArmSoM CM5 in raw processing power, and the benchmarks make that crystal clear. In single-core performance, the Raspberry Pi CM5 delivers a stellar 804 compared to the ArmSoM CM5’s modest 321.
The ArmSoM’s score is only about 40% of the Pi’s, and the difference is noticeable in tasks that rely on single-threaded performance, like browsing, lightweight applications, or running certain server processes. The Pi CM5 stays ahead in multi-core performance as well, scoring an impressive 1651 to the ArmSoM CM5’s 1261 (about 76% of the Pi’s score), which makes the Pi CM5 the clear choice for CPU-intensive tasks. That said, the ArmSoM CM5 isn’t trying to play the same game. It’s built with a different focus, and its strengths lie elsewhere. The 6 TOPS NPU on the ArmSoM CM5 is a game-changer for AI workloads, allowing it to handle tasks like language models or image recognition with ease, something the Raspberry Pi CM5 lacks entirely.

Final thoughts
After spending time with the ArmSoM CM5, it’s clear that this little board has carved out its niche. It may not outshine the Raspberry Pi CM5 in raw CPU benchmarks, but it brings its own strengths to the table. The built-in NPU, seamless 4K playback, and thoughtful design make it a compelling choice for AI-driven edge projects, media servers, or even as a replacement for an Android TV box. What impressed me most was its support for Raspberry Pi IO boards. I feel that the ArmSoM CM5 isn’t trying to be a Raspberry Pi killer. Instead, it’s a specialist board that excels in areas where the Pi falters. As I wrap up this review, I’m also thinking about running some emulators on the CM5 to dive deeper into its GPU performance, and for the fun of it. Recently, many retro game emulation videos have been popping up in my feed, and they’re tempting me to dip my toes in. If you want to see that, let me know in the comments section! 🕹️
  17. by: Chris Coyier Mon, 03 Feb 2025 17:27:05 +0000 I kinda like the idea of the “minimal” service worker. Service Workers can be pretty damn complicated and the power of them honestly makes me a little nervous. They are middlemen between the browser and the network, and I can imagine really dinking that up, myself. Not to dissuade you from using them, as they can do useful things no other technology can do. That’s why I like the “minimal” part. I want to understand what it’s doing extremely clearly! The less code the better. Tantek posted about that recently, with a minimal idea: That seems clearly useful. The bit about linking to an archive of the page, though, seems a smidge off to me. If the reason a user can’t see the page is because they are offline, a page that sends them to the Internet Archive isn’t going to work either. But I like the bit about caching and at least trying to do something. Jeremy Keith did some thinking about this back in 2018 as well: The implementation is actually just a few lines of code. A variation of it handles Tantek’s idea as well, implementing a custom offline page that could do the thing where it links off to an archive elsewhere. I’ll leave you with a couple more links. Have you heard the term LoFi? I’m not the biggest fan of the shortening of it, because “Lo-fi” is a pretty established musical term, not to mention “low fidelity” is useful in all sorts of contexts. But recently in web tech it refers to “Local First”. I dig the idea honestly and do see it as a place for technology (and companies that make technology) to step in and really make this style of working easy. Plenty of stuff already works this way. I think of the Notes app on my phone. Those notes are always available. It doesn’t (seem to) care if I’m online or offline. If I’m online, they’ll sync up with the cloud so other devices and backups will have the latest, but if not, so be it. It better as heck work that way! And I’m glad it does, but lots of stuff on the web does not (CodePen doesn’t). But I’d like to build stuff that works that way and have it not be some huge mountain to climb. That “eh, we’ll just sync later/whenever we have network access” idea being super non-trivial is part of the issue. Technology could make easy/dumb choices like “last write wins”, but that tends to be dangerous data-loss territory that users don’t put up with. Instead, data needs to be intelligently merged, and that isn’t easy. Dropbox is a multi-billion dollar company that deals with this, and they admittedly don’t always have it perfect. One of the major solutions is the concept of CRDTs, which are an impressive idea to say the least, but are complex enough that most of us will gently back away. So I’ll simply leave you with A Gentle Introduction to CRDTs.
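(For the curious: a “minimal” service worker in the spirit of the approach Jeremy describes can boil down to something like this sketch. It assumes you serve an /offline.html fallback page; the cache name is arbitrary.)

// sw.js: a minimal offline-fallback service worker (sketch)
const CACHE = 'offline-v1';

// Cache the fallback page when the service worker installs
self.addEventListener('install', (event) => {
  event.waitUntil(caches.open(CACHE).then((cache) => cache.add('/offline.html')));
});

// For page navigations, try the network first; if that fails, serve the cached fallback
self.addEventListener('fetch', (event) => {
  if (event.request.mode === 'navigate') {
    event.respondWith(
      fetch(event.request).catch(() => caches.match('/offline.html'))
    );
  }
});

Register it from any page with navigator.serviceWorker.register('/sw.js'), and the offline page takes over whenever a navigation can’t reach the network.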
  18. by: Ryan Trimble Mon, 03 Feb 2025 15:23:37 +0000 If you follow CSS feature development as closely as we do here at CSS-Tricks, you may be like me: eager to use many of these amazing tools, but finding browser support sometimes lagging behind what might be considered “modern” CSS (whatever that means). Even if browser vendors have all shipped a certain feature, users might not have the latest versions! We can certainly plan for this a number of ways:
feature detection with @supports
progressively enhanced designs
polyfills
For even extra help, we turn to build tools. Chances are, you’re already using some sort of build tool in your projects today. CSS developers are most likely familiar with CSS pre-processors (such as Sass or Less), but if you don’t know, these are tools capable of compiling many CSS files into one stylesheet. CSS pre-processors help make organizing CSS a lot easier, as you can move parts of CSS into related folders and import things as needed. Pre-processors do not just provide organizational superpowers, though. Sass gave us a crazy list of features to work with, including:
extends
functions
loops
mixins
nesting
variables
…more, probably!
For a while, this big feature set provided a means of filling gaps missing from CSS, making Sass (or whatever preprocessor you fancy) feel like a necessity when starting a new project. CSS has evolved a lot since the release of Sass — we have so many of those features in CSS today — so it doesn’t quite feel that way anymore, especially now that we have native CSS nesting and custom properties. Along with CSS pre-processors, there’s also the concept of post-processing. This type of tool usually helps transform compiled CSS in different ways, like auto-prefixing properties for different browser vendors, code minification, and more. PostCSS is the big one here, giving you tons of ways to manipulate and optimize your code, another step in the build pipeline. In many implementations I’ve seen, the build pipeline typically runs roughly like this:
Generate static assets
Build application files
Bundle for deployment
CSS is usually handled in that first part, which includes running CSS pre- and post-processors (though post-processing might also happen after Step 2). As mentioned, the continued evolution of CSS makes it less necessary to reach for a tool such as Sass, so we might have an opportunity to save some time.

Vite for CSS
Awarded “Most Adopted Technology” and “Most Loved Library” in the State of JavaScript 2024 survey, Vite certainly seems to be one of the more popular build tools available. Vite is mainly used to build reactive JavaScript front-end frameworks, such as Angular, React, Svelte, and Vue (made by the same developer, of course). As the name implies, Vite is crazy fast and can be as simple or complex as you need it, and it has become one of my favorite tools to work with. Vite is mostly thought of as a JavaScript tool for JavaScript projects, but you can use it without writing any JavaScript at all. Vite works with Sass, though you still need to install Sass as a dependency to include it in the build pipeline. On the other hand, Vite also automatically supports compiling CSS with no extra steps. We can organize our CSS code how we see fit, with no or very minimal configuration necessary. Let’s check that out. We will be using Node and npm to install Node packages, like Vite, as well as commands to run and build the project.
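(You can quickly check whether both are already installed; these commands print version numbers if they are:

node --version
npm --version

Any reasonably recent versions should do for this walkthrough.)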
If you do not have node or npm installed, please check out the download page on their website. Navigate a terminal to a safe place to create a new project, then run:

npm create vite@latest

The command line interface will ask a few questions; you can keep it as simple as possible by choosing Vanilla and JavaScript, which will provide you with a starter template including some no-frameworks-attached HTML, CSS, and JavaScript files to help get you started. Before running other commands, open the folder in your IDE (integrated development environment, such as VS Code) of choice so that we can inspect the project files and folders. If you would like to follow along with me, delete the following files that are unnecessary for demonstration: assets/ public/ src/ .gitignore We should only have the following files left in our project folder: index.html package.json Let’s also replace the contents of index.html with an empty HTML template: <!doctype html> <html lang="en"> <head> <meta charset="UTF-8" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <title>CSS Only Vite Project</title> </head> <body> <!-- empty for now --> </body> </html> One last piece to set up is Vite’s dependencies, so let’s run the npm installation command:

npm install

A short sequence will occur in the terminal. Then we’ll see a new folder called node_modules/ and a package-lock.json file added in our file viewer. node_modules/ is used to house all package files installed through the node package manager, and it allows us to import and use installed packages throughout our applications. package-lock.json is a file usually used to make sure a development team is all using the same versions of packages and dependencies. We most likely won’t need to touch these things, but they are necessary for Node and Vite to process our code during the build. Inside the project’s root folder, we can create a styles/ folder to contain the CSS we will write. Let’s create one file to begin with, main.css, which we can use to test out Vite. ├── public/ ├── styles/ | └── main.css └── index.html In our index.html file, inside the <head> section, we can include a <link> tag pointing to the CSS file: <head> <meta charset="UTF-8" /> <link rel="icon" type="image/svg+xml" href="/vite.svg" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <title>CSS Only Vite Project</title> <!-- Main CSS --> <link rel="stylesheet" href="styles/main.css"> </head> Let’s add a bit of CSS to main.css: body { background: green; } It’s not much, but it’s all we’ll need at the moment! In our terminal, we can now run the Vite build command using npm:

npm run build

With everything linked up properly, Vite will build things based on what is available within the index.html file, including our linked CSS files. The build will be very fast, and you’ll be returned to your terminal prompt. Vite will provide a brief report, showcasing the file sizes of the compiled project. The newly generated dist/ folder is Vite’s default output directory, which we can open to see our processed files. Check out assets/index.css (the filename will include a unique hash for cache busting) and you’ll see the code we wrote, minified. Now that we know how to make Vite aware of our CSS, we will probably want to start writing more CSS for it to compile. As quick as Vite is with our code, constantly re-running the build command would still get very tedious.
Luckily, Vite provides its own development server, which includes a live environment with hot module reloading, making changes appear instantly in the browser. We can start the Vite development server by running the following terminal command:

npm run dev

Vite uses the default network port 5173 for the development server. Opening the http://localhost:5173/ address in your browser will display a blank screen with a green background. Adding any HTML to the index.html or CSS to main.css, Vite will reload the page to display changes. To stop the development server, use the keyboard shortcut CTRL+C or close the terminal to kill the process. At this point, you pretty much know all you need to know about how to compile CSS files with Vite. Any CSS file you link up will be included in the built file.

Organizing CSS into Cascade Layers
One of the items on my 2025 CSS Wishlist is the ability to apply a cascade layer to a link tag. To me, this might be helpful for organizing CSS in meaningful ways, with fine control over the cascade and the benefits cascade layers provide. Unfortunately, this is a rather difficult ask when considering the way browsers paint styles in the viewport. This type of functionality is being discussed between the CSS Working Group and TAG, but it’s unclear if it’ll move forward. With Vite as our build tool, we can replicate the concept as a way to organize our built CSS. Inside the main.css file, let’s add the @layer at-rule to set the cascade order of our layers. I’ll use a couple of layers here for this demo, but feel free to customize this setup to your needs. /* styles/main.css */ @layer reset, layouts; This is all we’ll need inside our main.css. Let’s create another file for our reset. I’m a fan of my friend Mayank‘s modern CSS reset, which is available as a Node package. We can install the reset by running the following terminal command:

npm install @acab/reset.css

Now, we can import Mayank’s reset into our newly created reset.css file, as a cascade layer: /* styles/reset.css */ @import '@acab/reset.css' layer(reset); If there are any other reset layer stylings we want to include, we can open up another @layer reset block inside this file as well. /* styles/reset.css */ @import '@acab/reset.css' layer(reset); @layer reset { /* custom reset styles */ } This @import statement is used to pull packages from the node_modules folder. This folder is not generally available in the built, public version of a website or application, so referencing it might cause problems if not handled properly. Now that we have two files (main.css and reset.css), let’s link them up in our index.html file. Inside the <head> tag, let’s add them after <title>: <head> <meta charset="UTF-8" /> <link rel="icon" type="image/svg+xml" href="/vite.svg" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <title>CSS Only Vite Project</title> <link rel="stylesheet" href="styles/main.css"> <link rel="stylesheet" href="styles/reset.css"> </head> The idea here is that we can add each CSS file in the order we need them parsed. In this case, I’m planning to pull in each file named after the cascade layers set up in the main.css file. This may not work for every setup, but it is a helpful way to keep in mind the precedence of how cascade layers affect computed styles when rendered in a browser, as well as to group similarly relevant files. Since we’re in the index.html file, we’ll add a third CSS <link> for styles/layouts.css.
<head> <meta charset="UTF-8" /> <link rel="icon" type="image/svg+xml" href="/vite.svg" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <title>CSS Only Vite Project</title> <link rel="stylesheet" href="styles/main.css"> <link rel="stylesheet" href="styles/reset.css"> <link rel="stylesheet" href="styles/layouts.css"> </head> Create the styles/layouts.css file with the new @layer layouts declaration block, where we can add layout-specific stylings. /* styles/layouts.css */ @layer layouts { /* layouts styles */ } For some quick, easy, and awesome CSS snippets, I tend to refer to Stephanie Eckles‘ SmolCSS project. Let’s grab the “Smol intrinsic container” code and include it within the layouts cascade layer: /* styles/layouts.css */ @layer layouts { .smol-container { width: min(100% - 3rem, var(--container-max, 60ch)); margin-inline: auto; } } This powerful little two-line container uses the CSS min() function to provide a responsive width, with margin-inline: auto; set to horizontally center itself and contain its child elements. We can also dynamically adjust the width using the --container-max custom property. Now if we re-run the build command npm run build and check the dist/ folder, our compiled CSS file should contain:
Our cascade layer declarations from main.css
Mayank’s CSS reset fully imported from reset.css
The .smol-container class added from layouts.css
As you can see, we can get quite far with Vite as our build tool without writing any JavaScript. However, if we choose to, we can extend our build’s capabilities even further by writing just a little bit of JavaScript. Post-processing with Lightning CSS Lightning CSS is a CSS parser and post-processing tool that has a lot of nice features baked into it to help with cross-compatibility among browsers and browser versions. Lightning CSS can transform a lot of modern CSS into backward-compatible styles for you. We can install Lightning CSS in our project with npm: npm install --save-dev lightningcss The --save-dev flag means the package will be installed as a development dependency, as it won’t be included with our built project. We can include it within our Vite build process, but first, we will need to write a tiny bit of JavaScript: a configuration file for Vite. Create a new file called vite.config.mjs and add the following code inside: // vite.config.mjs export default { css: { transformer: 'lightningcss' }, build: { cssMinify: 'lightningcss' } }; Vite will now use Lightning CSS to transform and minify CSS files. Now, let’s give it a test run using an oklch color. Inside main.css let’s add the following code: /* main.css */ body { background-color: oklch(51.98% 0.1768 142.5); } Then, after re-running the Vite build command, we can see the fallbacks for the background-color property added in the compiled CSS: /* dist/index.css */ body { background-color: green; background-color: color(display-p3 0.216141 0.494224 0.131781); background-color: lab(46.2829% -47.5413 48.5542); } Lightning CSS converts the oklch() color, providing fallbacks for browsers which might not support newer color types. Following the Lightning CSS documentation for using it with Vite, we can also specify browser versions to target by installing the browserslist package. Browserslist will give us a way to specify browsers by matching certain conditions (try it out online!): npm install -D browserslist Inside our vite.config.mjs file, we can configure Lightning CSS further.
Let’s import the browserslist package into the Vite configuration, as well as a module from the Lightning CSS package to help us use browserslist in our config: // vite.config.mjs import browserslist from 'browserslist'; import { browserslistToTargets } from 'lightningcss'; We can add configuration settings for lightningcss, containing the browser targets based on specified browser versions, to Vite’s css configuration: // vite.config.mjs import browserslist from 'browserslist'; import { browserslistToTargets } from 'lightningcss'; export default { css: { transformer: 'lightningcss', lightningcss: { targets: browserslistToTargets(browserslist('>= 0.25%')), } }, build: { cssMinify: 'lightningcss' } }; There are lots of ways to extend Lightning CSS with Vite, such as enabling specific features, excluding features we won’t need, or writing our own custom transforms. // vite.config.mjs import browserslist from 'browserslist'; import { browserslistToTargets, Features } from 'lightningcss'; export default { css: { transformer: 'lightningcss', lightningcss: { targets: browserslistToTargets(browserslist('>= 0.25%')), // Including the `light-dark()` and color functions include: Features.LightDark | Features.Colors, } }, build: { cssMinify: 'lightningcss' } }; For a full list of the Lightning CSS features, check out their documentation on feature flags. Is any of this necessary? Reading through all this, you may be asking yourself if all of this is really necessary. The answer: absolutely not! But I think you can see the benefits of having access to partialized files that we can compile into unified stylesheets. I doubt I’d go to these lengths for smaller projects; however, if building something with more complexity, such as a design system, I might reach for these tools for organizing code, cross-browser compatibility, and thoroughly optimizing compiled CSS. Compiling CSS With Vite and Lightning CSS originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  19. by: Zainab Sutarwala Mon, 03 Feb 2025 14:38:00 +0000 There was a time when companies evaluated the performance of software engineers based on how quickly they delivered tasks. But 2025 is a different scenario in software development teams; nowadays, this isn’t the only criterion. Today, professional software engineers, commonly called developers, are aware of the importance of soft skills. Things such as open-mindedness, creativity, and a willingness to learn something new are soft skills that anyone can use, no matter which industry you are in. Why Are Soft Skills So Important?For a software engineer, soft skills are essential because they make you a better professional. Even if you have the hard skills required for the job, you will not get hired if you lack the soft skills that help you connect with the interviewer and other people around you. With soft skills, developers and programmers are well-equipped to utilize their technical skills to the fullest extent. The soft skills of engineers and programmers affect how well you work with people on your tech team and other teams, which will positively impact your career development. Tools like JavaScript frameworks and APIs automate the most technical processes. An example is Filestack, which allows the creation of high-performance software that handles the management of millions of files. Reliably adding video, image, or file management functions to an application manually can be an impossible task without the right tools. The developer, in this case, needs more than technical skills to convince the business to invest in the tools. Top Soft Skills for Software Developers1. CreativityCreativity isn’t only about artistic expression. The technical professions demand a good amount of creativity. It allows good software engineers and programmers to solve complex problems and find new opportunities while developing innovative products or apps and improving current ones. You need to think creatively and practice approaching various problems from the right angle. 2. PatienceSoftware development isn’t a simple feat. It is a complex effort that includes long processes. Most activities take plenty of time, even in agile environments, whether it is a project kick-off, project execution, deployment, testing, or updates. Patience is vital when you’re starting out as a developer or programmer. The most important person you will ever need to be patient with is yourself. It would be best to give yourself sufficient time to make mistakes and fix them. If you’re patient with yourself, it becomes simple to stay patient with the people around you. At times, people will require more convincing, and you have to do your best to “sell” your idea and approach to them. 3. CommunicationCommunication is the basis of collaboration and thus crucial to any software development project. Whether you’re communicating with colleagues, clients, or managers, do not leave anybody guessing; ensure every developer on the team is on the same page about every aspect of a project. Besides the traditional skills of respect, assertiveness, active listening, empathy, and conflict resolution, software engineers have to master explaining technical information clearly to non-techies. The professional needs to understand what somebody is trying to ask, even if that person does not understand the software’s specific parameters. 4. ListeningThese soft skills are intertwined: being a good communicator and a good listener go hand in hand.
Keep in mind that everybody you deal with or communicate with deserves to be heard, and they might have information that can help you do your work efficiently. Set other distractions aside and focus totally on the person talking to you. You must also keep an eye out for nonverbal communication indicators, since they will disclose a lot about what somebody is saying. As per experts in the field, 93% of communication is nonverbal. Thus, you must pay close attention to what colleagues or clients convey, even when they are not saying anything. 5. TeamworkIt doesn’t matter what you plan to do; there will be a time when you need to work as part of a team. Whether it is a team of designers, developers, programmers, or a project team, developers have to work well with others to succeed. Working with the whole team makes work more fun and makes people more willing to help you out in the future. You might not always agree with other people on the team; however, having a difference of opinion helps to build successful companies. 6. People and Time ManagementSoftware development is about working on a project within a stipulated time frame. Usually, software developers and engineers are highly involved in managing people and different projects. Thus, management is an essential soft skill for software developers. People and time management are two critical characteristics that recruiters search for in a software developer candidate. A software developer at any experience level has to work well in the team and meet time estimates. Thus, if you want to become a successful software programmer at a good company, it is essential to train yourself in the successful management of people and time. 7. Problem-solvingThere will be points in your career when you face problems. Problems can happen regularly or rarely; either way, they are inevitable. The way you handle these situations will leave a massive impact on your career and the company you are working for. Thus, problem-solving is an essential soft skill that employers search for in their prospective candidates; the more examples of problem-solving you have, the better your prospects will be. When approaching a new problem, view it objectively, even if you caused it accidentally. 8. AdaptabilityAdaptability is another essential soft skill required in today’s fast-evolving world. This skill means being capable of changing with the environment and adjusting course based on how a situation develops. Employers value adaptability, and it can give you significant benefits in your career. 9. Self-awarenessSoftware developers must be highly confident in the things they know and humble about the things they don’t. Knowing which areas you need to improve in is a form of true confidence. If software developers are aware of their weaker sides, they will seek proper training or mentorship from their colleagues and managers. In the majority of cases, when people deny that they do not know something, it is often a sign of insecurity. However, if software developers feel secure and acknowledge their weaknesses, it is a sign of maturity and a valuable skill to possess. In the same way, being confident in the things they do know is also very important. Self-confidence allows people to speak their minds, make fewer mistakes, and face criticism. 10. Open-mindednessIf you are open-minded, you are keen to accept innovative ideas, whether they are yours or somebody else’s.
Even the worst ideas can inspire something incredible if you consider them before dismissing them. The more ideas you collect, the more projects you will have the potential to work on. Though not every idea you have may turn into something, you will not know what it could become until you have thought about it in depth. It helps to keep an open mind to new ideas from your team, your company, and your clients. Your clients are the people who will use your product; thus, they are the best ones to tell you what works or what they require. Final ThoughtsAll the skills outlined here complement one another. Good communication skills lead to greater collaboration and team cohesiveness. Knowing your strengths and weaknesses will improve your accountability. The result is a well-rounded software developer with solid potential. The soft skills mentioned in this article are the best input for a brilliant career, since they bring several benefits. Perhaps you feel that you lack some of these soft skills, or worry that it is too late to develop them now. The good news is that all of these soft skills can quickly be learned or improved, and in 2025, there are a lot of resources available to help developers with that. It is not very difficult to improve your soft skills; better to start now. The post Top 10 Soft Skills for Software Developers in 2025 appeared first on The Crazy Programmer.
  20. by: Community If you’re a developer or a power user, you probably understand the importance of having an efficient and organized workflow. Whenever I get to work with a Windows-based system, I really miss the terminal emulator along with the ability to quickly switch between different terminal sessions. Not to mention, sometimes I need to gather 2-3 command sessions in a single view. There are many scenarios where I need to run multiple commands simultaneously. Sure, we have the Command Prompt or Windows Terminal to use, but they do not offer a similar experience. 💡 Windows 11 Terminal does provide you the ability to create multiple panes within a tab. You can use the shortcuts Alt + Shift + - and Alt + Shift + + to create horizontal and vertical panes, respectively. With Linux, I had access to terminal multiplexers like tmux and screen. But, wait, there is a solution on Windows 10 and Windows 11 that can work as a terminal multiplexer and a text editor – all in one! Enter Emacs 🤩 Why Use Emacs as a Terminal Multiplexer?While I have already mentioned why we are looking at a solution like Emacs, let me highlight some more reasons you might want to try this out (apart from the fact that it is open source and super awesome): 1. Less is MoreEmacs is an all-in-one solution. It can work as a terminal, text editor, file manager, email client, calculator, and even a text-based web browser. All these features are packed in a 150 MB zipped file. Pretty crazy, right? You won’t know all this until you give it a try! 2. Powerful CustomizationEmacs is famously customizable. You can tweak almost everything: keybindings, window layouts, themes, and even the behavior of your terminal sessions. This allows you to tailor the environment to your exact needs, providing a highly efficient experience. 3. Integrated Shell SupportEmacs allows you to open a shell session inside a buffer, and with its support for eshell and shell, you can run shell commands, manipulate files, and perform operations right alongside your text editing. 4. Flexibility with WindowsEmacs is great at handling multiple buffers in a single window. You can split your window into multiple panes (or “windows,” as Emacs calls them), just like with tmux, enabling you to work on different tasks simultaneously without feeling cluttered. Using Emacs as a Terminal MultiplexerNow that you know the benefits, let me give you a walkthrough on using Emacs as a terminal multiplexer. Step 1: Install WingetWinget comes as a part of the App Installer package, so you need to first install the App Installer from the Microsoft Store. Step 2: Install Emacs on Windows 11 or Windows 10Winget makes it super simple to install Emacs. Simply run this command: winget install emacs Step 3: Open Emacs Open Emacs from the Windows 11 Start menu. Step 4: Run shellWithin Emacs, press Alt + X, type shell, and hit Enter to get the interactive user interface. 💡 In Emacs terminology, the Alt key is often referred to as M, and Ctrl is referred to as C. I have used C and M to represent them throughout the article. Now, with Emacs, you will realize the following benefits: a nice auto-completion system, the ability to edit any previous command at point, and the ability to quickly jump between different command sessions. Autocomplete selection: To see autocompletion in a shell buffer, simply type ‘a’ and then hit Tab, and you’ll be presented with a list of options. You can select one of the options with a mouse click as shown above.
📋 If you see terms like C-s, C-u, or Alt-x, read them as Ctrl + s/u or Alt + x. To search for previous commands and outputs, hit C-s <your-term>. This is what I referred to when I mentioned consistent keybindings for all of your workflow. Within your Emacs environment, C-s will do a forward search everywhere unless you modify it. To edit your previous commands, move your cursor to the previous command or do a quick search, make the necessary adjustments, and hit Enter. If you want to open another shell, press C-u and then Alt-x shell. By default, this buffer will be named “Shell 2”. You can navigate between these different shells with C-x b and Tab. Use the mouse for selection. We’ll make it more efficient in the next section. Terminal MultiplexingNow here comes the magic. If you want to create two vertical layouts, simply use the keybinding C-x + 3. Then, if you want two horizontal layouts, use the shortcut C-x + 2. For navigation to other panes, you can use your mouse or the Emacs shortcut C-x + o. Auto-completion and multi-window layoutsAnother quick tip: with just a one-line configuration, Emacs can provide useful completions based on your actions with ido-mode. Save the line below in a new .emacs file, usually located at C:/Users/YourUser/AppData/Roaming. After saving, you don’t need to restart Emacs. (ido-mode 1) Let’s enable winner-mode as well, to undo and redo multi-window layouts. Add the line below to the config file like you did above: (winner-mode 1) Finally, save this two-line configuration and evaluate it: simply do Alt + x and then type eval-buffer. Now with ido-mode, you can simply switch to the Shell 2 buffer using C-x b 2. With winner-mode in place, if you want to get a full preview of a single pane, press C-x + 1, and then to go back to the previous layout, run winner-undo. You can save yourself time by mapping keybindings for the winner-undo and winner-redo commands. Keybindings CheatsheetHere’s a list of all the keybindings we used throughout this tutorial.
C-x C-s: Saves the file
Alt-x: Opens the mini prompt to enter interactive commands
C-s: Forward search
C-u Alt-x: Runs another instance of the command
C-x b: Navigates between buffers
C-x 2: Splits into two horizontal layouts
C-x 3: Splits into two vertical layouts
C-x o: Moves to another pane
C-x 1: Gets a full view of a particular pane
💬 Do you love having multiple terminal sessions as well? Let me know your thoughts in the comments! Author Info Bhuwan Mishra is a Fullstack developer, with Python and Go as his tools of choice. He takes pride in building and securing web applications, APIs, and CI/CD pipelines, as well as tuning servers for optimal performance. He also has a passion for working with Kubernetes.
  21. by: Geoff Graham Fri, 31 Jan 2025 15:27:50 +0000 I often wonder what it’s like working for the Chrome team. You must get issued some sort of government-level security clearance for the latest browser builds that grants you permission to bash on them ahead of everyone else and come up with these rad demos showing off the latest features. No, I’m not jealous. Why are you asking? Totally unrelated, did you see the release notes for Chrome 133? It’s currently in beta, but the Chrome team has been publishing a slew of new articles with pretty incredible demos that are tough to ignore. I figured I’d round those up in one place. attr() for the masses! We’ve been able to use HTML attributes in CSS for some time now, but it’s been relegated to the content property and only parsed strings. <h1 data-color="orange">Some text</h1> h1::before { content: ' (Color: ' attr(data-color) ') '; } Bramus demonstrates how we can now use it on any CSS property, including custom properties, in Chrome 133. So, for example, we can take the attribute’s value and put it to use on the element’s color property: h1 { color: attr(data-color type(<color>), #fff); } This is a trite example, of course. But it helps illustrate that there are three moving pieces here:
the attribute (data-color)
the type (type(<color>))
the fallback value (#fff)
We make up the attribute. It’s nice to have a wildcard we can insert into the markup and hook into for styling. The type() is a new deal that helps CSS know what sort of value it’s working with. If we had been working with a numeric value instead, we could ditch that in favor of something less verbose. For example, let’s say we’re using an attribute for the element’s font size: <div data-size="20">Some text</div> Now we can hook into the data-size attribute and use the assigned value to set the element’s font-size property, based on px units: div { font-size: attr(data-size px, 16px); } The fallback value is optional and might not be necessary depending on your use case. Scroll states in container queries! This is a mind-blowing one. If you’ve ever wanted a way to style a sticky element when it’s in a “stuck” state, then you already know how cool it is to have something like this. Adam Argyle takes the classic pattern of an alphabetical list and applies styles to the letter heading when it sticks to the top of the viewport. The same is true of elements with scroll snapping and elements that are scrolling containers. In other words, we can style elements when they are “stuck”, when they are “snapped”, and when they are “scrollable”. Quick little example that you’ll want to open in a Chromium browser (see the CodePen demo). The general idea (and that’s all I know for now) is that we register a container… you know, a container that we can query. We give that container a container-type that is set to the type of scrolling we’re working with. In this case, we’re working with sticky positioning where the element “sticks” to the top of the page. .sticky-nav { container-type: scroll-state; } A container can’t query itself, so that basically has to be a wrapper around the element we want to stick. Menus are a little funny because we have the <nav> element and usually stuff it with an unordered list of links. So, our <nav> can be the container we query since we’re effectively sticking an unordered list to the top of the page.
<nav class="sticky-nav"> <ul> <li><a href="#">Home</a></li> <li><a href="#">About</a></li> <li><a href="#">Blog</a></li> </ul> </nav> We can put the sticky logic directly on the <nav> since it’s technically holding what gets stuck: .sticky-nav { container-type: scroll-state; /* set a scroll container query */ position: sticky; /* set sticky positioning */ top: 0; /* stick to the top of the page */ } I suppose we could use the container shorthand if we were working with multiple containers and needed to distinguish one from another with a container-name. Either way, now that we’ve defined a container, we can query it using @container! In this case, we declare the type of container we’re querying: @container scroll-state() { } And we tell it the state we’re looking for: @container scroll-state(stuck: top) { } If we were working with a sticky footer instead of a menu, then we could say stuck: bottom instead. But the kicker is that once the <nav> element sticks to the top, we get to apply styles to it in the @container block, like so: .sticky-nav { border-radius: 12px; container-type: scroll-state; position: sticky; top: 0; /* When the nav is in a "stuck" state */ @container scroll-state(stuck: top) { border-radius: 0; box-shadow: 0 3px 10px hsl(0 0 0 / .25); width: 100%; } } It seems to work when nesting other selectors in there. So, for example, we can change the links in the menu when the navigation is in its stuck state: .sticky-nav { /* Same as before */ a { color: #000; font-size: 1rem; } /* When the nav is in a "stuck" state */ @container scroll-state(stuck: top) { /* Same as before */ a { color: orangered; font-size: 1.5rem; } } } So, yeah. As I was saying, it must be pretty cool to be on the Chrome developer team and get ahead of stuff like this as it’s released. Big ol’ thanks to Bramus and Adam for consistently cluing us in on what’s new and doing the great work it takes to come up with such amazing demos to show things off. Chrome 133 Goodies originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  22. by: Geoff Graham Fri, 31 Jan 2025 14:11:00 +0000 ::view-transition /* 👈 Captures all the clicks! */ └─ ::view-transition-group(root) └─ ::view-transition-image-pair(root) ├─ ::view-transition-old(root) └─ ::view-transition-new(root) The trick? It’s that sneaky little pointer-events property! Slapping it directly on the ::view-transition pseudo-element allows us to click “under” it, meaning the full page is interactive even while the view transition is running. ::view-transition { pointer-events: none; } I always, always, always forget about pointer-events, so thanks to Bramus for posting this little snippet. I also appreciate the additional note about removing the :root element from participating in the view transition: :root { view-transition-name: none; } He quotes the spec, which notes the reason why snapshots do not respond to hit-testing. Keeping the page interactive while a View Transition is running originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  23. by: Abhishek Kumar Fri, 31 Jan 2025 17:03:02 +0530 I’ve been using Cloudflare Tunnel for over a year, and while it’s great for hosting static HTML content securely, it has its limitations. For instance, if you’re running something like Jellyfin, you might run into issues with bandwidth limits, which can lead to account bans due to their terms of service. Cloudflare Tunnel is designed with lightweight use cases in mind, but what if you need something more robust and self-hosted? Let me introduce you to some fantastic open-source alternatives that can give you the freedom to host your services without restrictions. 1. ngrok (OSS Edition)ngrok is a globally distributed reverse proxy designed to secure, protect, and accelerate your applications and network services, regardless of where they are hosted. Acting as the front door to your applications, ngrok integrates a reverse proxy, firewall, API gateway, and global load balancing into one seamless solution. Although the original open-source version of ngrok (v1) is no longer maintained, the platform continues to contribute to the open-source ecosystem with tools like Kubernetes operators and SDKs for popular programming languages such as Python, JavaScript, Go, Rust, and Java. Key features:
Securely connect APIs and databases across networks without complex configurations.
Expose local applications to the internet for demos and testing without deployment.
Simplify development by inspecting and replaying HTTP callback requests.
Implement advanced traffic policies like rate limiting and authentication with a global gateway-as-a-service.
Control device APIs securely from the cloud using ngrok on IoT devices.
Capture, inspect, and replay traffic to debug and optimize web services.
Includes SDKs and integrations for popular programming languages to streamline workflows.
ngrok
2. frp (Fast Reverse Proxy)frp (Fast Reverse Proxy) is a high-performance tool designed to expose local servers located behind NAT or firewalls to the internet. Supporting protocols like TCP, UDP, HTTP, and HTTPS, frp enables seamless request forwarding to internal services via custom domain names. It also includes a peer-to-peer (P2P) connection mode for direct communication, making it a versatile solution for developers and system administrators. Key features:
Expose local servers securely, even behind NAT or firewalls, using TCP, UDP, HTTP, or HTTPS protocols.
Provide token and OIDC authentication for secure connections.
Support advanced configurations such as encryption, compression, and TLS for enhanced security.
Enable efficient traffic handling with features like TCP stream multiplexing, QUIC protocol support, and connection pooling.
Facilitate monitoring and management through a server dashboard, client admin UI, and Prometheus integration.
Offer flexible routing options, including URL routing, custom subdomains, and HTTP header rewriting.
Implement load balancing and service health checks for reliable performance.
Allow for port reuse, port range mapping, and bandwidth limits for granular control.
Simplify SSH tunneling with a built-in SSH Tunnel Gateway.
Fast reverse proxy
3. localtunnelLocaltunnel is an open-source, self-hosted tool that simplifies the process of exposing local web services to the internet. By creating a secure tunnel, Localtunnel allows developers to share their local resources without needing to configure DNS or firewall settings. It’s built on Node.js and can be easily installed using npm.
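To get a quick feel for it, here’s a minimal sketch using the localtunnel npm package (assuming you already have a local server running on port 8080; the port is just an example): install the client globally with npm install -g localtunnel and then run lt --port 8080. The lt client prints a public URL, served through the default localtunnel.me relay, that forwards traffic to your local port.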
While Localtunnel is straightforward and effective, the project hasn't seen active maintenance since 2022, and the default Localtunnel.me server's long-term reliability is uncertain. However, you can host your own Localtunnel server for better control and scalability. Key features:
Secure HTTPS for all tunnels, ensuring safe connections.
Share your local development environment with a unique, publicly accessible URL.
Test webhooks and external API callbacks with ease.
Integrate with cloud-based browser testing tools for UI testing.
Restart your local server seamlessly, as Localtunnel automatically reconnects.
Request a custom subdomain or proxy to a hostname other than localhost for added flexibility.
localtunnel
4. boringproxyboringproxy is a reverse proxy and tunnel manager designed to simplify the process of securely exposing self-hosted web services to the internet. Whether you're running a personal website, Nextcloud, Jellyfin, or other services behind a NAT or firewall, boringproxy handles all the complexities, including HTTPS certificate management and NAT traversal, without requiring port forwarding or extensive configuration. It’s built with self-hosters in mind, offering a simple, fast, and secure solution for remote access. Key features:
100% free and open source under the MIT license, ensuring transparency and flexibility.
No configuration files required; boringproxy works with sensible defaults and simple CLI parameters for easy adjustments.
No need for port forwarding, NAT traversal, or firewall rule configuration, as boringproxy handles it all.
End-to-end encryption with optional TLS termination at the server, client, or application, integrated seamlessly with Let's Encrypt.
Fast web GUI for managing tunnels, which works great on both desktop and mobile browsers.
Fully configurable through an HTTP API, allowing for automation and integration with other tools.
Cross-platform support on Linux, Windows, Mac, and ARM devices (e.g., Raspberry Pi and Android).
SSH support for those who prefer using a standard SSH client for tunnel management.
boringproxy
5. zrokzrok is a next-generation, peer-to-peer sharing platform built on OpenZiti, a programmable zero-trust network overlay. It enables users to share resources securely, both publicly and privately, without altering firewall or network configurations. Designed for technical users, zrok provides frictionless sharing of HTTP, TCP, and UDP resources, along with files and custom content.
Share resources with non-zrok users over the public internet or directly with other zrok users in a peer-to-peer manner.
Works seamlessly on Windows, macOS, and Linux systems.
Start sharing within minutes using the zrok.io service. Download the binary, create an account, enable your environment, and share with a single command.
Easily expose local resources like localhost:8080 to public users without compromising security.
Share "network drives" publicly or privately and mount them on end-user systems for easy access.
Integrate zrok’s sharing capabilities into your applications with the Go SDK, which supports net.Conn and net.Listener for familiar development workflows.
Deploy zrok on a Raspberry Pi or scale it for large service instances. The single binary contains everything needed to operate your own zrok environment.
Leverages OpenZiti’s zero-trust principles for secure and programmable network overlays.
zrok
6. PagekitePageKite is a veteran in the tunneling space, providing HTTP(S) and TCP tunnels for more than 14 years.
It offers features like IP whitelisting and password authentication, and supports custom domains. While the project is completely open source and written in Python, the public service imposes limits, such as bandwidth caps, to prevent abuse. Users can unlock additional features and higher bandwidth through affordable payment plans. The free tier provides 2 GB of monthly transfer quota and supports custom domains, making it accessible for personal and small-scale use. Key features:
Enables any computer, such as a Raspberry Pi, laptop, or even an old cell phone, to act as a server for hosting services like WordPress, Nextcloud, or Mastodon while keeping your home IP hidden.
Provides simplified SSH access to mobile or virtual machines and ensures privacy by keeping firewall ports closed.
Supports embedded developers with features like naming and accessing devices in the field, secure communications via TLS, and scaling solutions for both lightweight and large-scale deployments.
Offers web developers the ability to test and debug work remotely, interact with secure APIs, and run webhooks, API servers, or Git repositories directly from their systems.
Utilizes a global relay network to ensure low latency, high availability, and redundancy, with infrastructure managed since 2010.
Ensures privacy by routing all traffic through its relays, hiding your IP address, and supporting both end-to-end and wildcard TLS encryption.
Pagekite
7. ChiselChisel is a fast and efficient TCP/UDP tunneling tool transported over HTTP and secured using SSH. Written in Go (Golang), Chisel is designed to bypass firewalls and provide a secure endpoint into your network. It is distributed as a single executable that functions as both client and server, making it easy to set up and use. Key features:
Offers a simple setup process with a single executable for both client and server functionality.
Secures connections using SSH encryption and supports authenticated client and server connections through user configuration files and fingerprint matching.
Automatically reconnects clients with exponential backoff, ensuring reliability in unstable networks.
Allows clients to create multiple tunnel endpoints over a single TCP connection, reducing overhead and complexity.
Supports reverse port forwarding, enabling connections to pass through the server and exit via the client.
Provides optional SOCKS5 support for both clients and servers, offering additional flexibility in routing traffic.
Enables tunneling through SOCKS or HTTP CONNECT proxies and supports SSH over HTTP using ssh -o ProxyCommand.
Performs efficiently, making it suitable for high-performance requirements.
Chisel
8. TelebitTelebit has quickly become one of my favorite tunneling tools, and it’s easy to see why. It's still fairly new but does a great job of getting things done. By installing Telebit Remote on any device, be it your laptop, Raspberry Pi, or another device, you can easily access it from anywhere. The magic happens thanks to a relay system that allows multiplexed incoming connections on any external port, making remote access a breeze. Not only that, but it also lets you share files and configure it like a VPN. Key features:
Share files securely between devices
Access your Raspberry Pi or other devices from behind a firewall
Use it like a VPN for additional privacy and control
SSH over HTTPS, even on networks with restricted ports
Simple setup with clear documentation and an installer script that handles everything
Telebit
9. tunnel.pyjam.asAs a web developer, one of my favorite tools for quickly sharing projects with clients is tunnel.pyjam.as. It allows you to set up SSL-terminated, ephemeral HTTP tunnels to your local machine without needing to install any custom software, thanks to Wireguard. It’s perfect for when you want to quickly show someone a project you’re working on without the hassle of complex configurations. Key features:
No software installation required, thanks to Wireguard
Quickly set up a reverse proxy to share your local services
SSL-terminated tunnels for secure connections
Simple to use, with just a curl command to start and stop tunnels
Ideal for quick demos or temporary access to local projects
tunnel.pyjam.as
Final thoughtsWhen it comes to tunneling tools, there’s no shortage of options, and each of the projects we’ve discussed here offers something unique. Personally, I’m too deeply invested in Cloudflare Tunnel to stop using it anytime soon. It’s become a key part of my workflow, and I rely on it for many of my use cases. However, that doesn’t mean I won’t continue exploring these open-source alternatives. I’m always excited to see how they evolve. For instance, with tunnel.pyjam.as, I find it incredibly time-saving to simply edit the tunnel.conf file and run its WireGuard instance to quickly share my projects with clients. I’d love to hear what you think! Have you tried any of these open-source tunneling tools, or do you have your own favorites? Let me know in the comments.
  24. by: Abhishek Kumar DeepSeek has taken the AI world by storm. While it’s convenient to use DeepSeek on their hosted website, we know that there’s no place like 127.0.0.1. 😉 Source: The Hacker News However, with recent events, such as a cyberattack on DeepSeek AI that halted new user registrations, or the DeepSeek AI database being exposed, it makes me wonder why more people don’t choose to run LLMs locally. Not only does running your AI locally give you full control and better privacy, but it also keeps your data out of someone else’s hands. In this guide, we'll walk you through setting up DeepSeek R1 on your Linux machine using Ollama as the backend and Open WebUI as the frontend. Let’s dive in! 📋 The DeepSeek version you will be running on the local system is a stripped-down version of the actual DeepSeek model that 'outperformed' ChatGPT. You'll need Nvidia/AMD graphics on your system to run it. Step 1: Install OllamaBefore we get to DeepSeek itself, we need a way to run Large Language Models (LLMs) efficiently. This is where Ollama comes in. What is Ollama?Ollama is a lightweight and powerful platform for running LLMs locally. It simplifies model management, allowing you to download, run, and interact with models with minimal hassle. The best part? It abstracts away all the complexities; there's no need to manually configure dependencies or set up virtual environments. Installing OllamaThe easiest way to install Ollama is by running the following command in your terminal: curl -fsSL https://ollama.com/install.sh | sh Once installed, verify the installation: ollama --version Now, let's move on to getting DeepSeek running with Ollama. Step 2: Install and run the DeepSeek modelWith Ollama installed, pulling and running the DeepSeek model is as simple as running this command: ollama run deepseek-r1:1.5b This command downloads the DeepSeek-R1 1.5B model, which is a small yet powerful AI model for text generation, answering questions, and more. The download may take some time depending on your internet speed, as these models can be quite large. Once the download is complete, you can interact with it immediately in the terminal. But let’s be honest, while the terminal is great for quick tests, it’s not the most polished experience. It would be better to use a Web UI with Ollama. While there are many such tools, I prefer Open WebUI. 12 Tools to Provide a Web UI for Ollama: Don’t want to use the CLI for Ollama for interacting with AI models? Fret not, we have some neat Web UI tools that you can use to make it easy! (It's FOSS, Ankush Das) Step 3: Setting up Open WebUIOpen WebUI provides a beautiful and user-friendly interface for chatting with DeepSeek. There are two ways to install Open WebUI: Direct Installation (for those who prefer a traditional setup) and Docker Installation (my personal go-to method). Don't worry, we'll be covering both. Method 1: Direct installationIf you prefer a traditional installation without Docker, follow these steps to set up Open WebUI manually. Step 1: Install Python & virtual environmentFirst, ensure you have Python installed along with the venv package for creating an isolated environment. Run the following command: sudo apt install python3-venv -y This installs the required package for managing virtual environments.
Step 2: Create a virtual environmentNext, create a virtual environment inside your home directory: python3 -m venv ~/open-webui-venv Step 3: Activate the virtual environmentThen, activate the virtual environment we just created: source ~/open-webui-venv/bin/activate You'll notice your terminal prompt changes, indicating that you’re inside the virtual environment. Step 4: Install Open WebUIWith the virtual environment activated, install Open WebUI by running: pip install open-webui This downloads and installs Open WebUI along with its dependencies. Step 5: Run Open WebUITo start the Open WebUI server, use the following command: open-webui serve Once the server starts, you should see output confirming that Open WebUI is running. Step 6: Access Open WebUI in your browserOpen your web browser and go to: http://localhost:8080 You'll now see the Open WebUI interface, where you can start chatting with DeepSeek AI! Method 2: Docker installation (Personal favorite)If you haven't installed Docker yet, no worries! Check out our step-by-step guide on how to install Docker on Linux before proceeding. Once that's out of the way, let's get Open WebUI up and running with Docker. Step 1: Pull the Open WebUI docker imageFirst, download the latest Open WebUI image from Docker Hub: docker pull ghcr.io/open-webui/open-webui:main This command ensures you have the most up-to-date version of Open WebUI. Step 2: Run Open WebUI in a docker containerNow, spin up the Open WebUI container: docker run -d \ -p 3000:8080 \ --add-host=host.docker.internal:host-gateway \ -v open-webui:/app/backend/data \ --name open-webui \ --restart always \ ghcr.io/open-webui/open-webui:main Don’t get scared looking at that big, scary command. Here’s what each part of the command actually does:
docker run -d: Runs the container in the background (detached mode).
-p 3000:8080: Maps port 8080 inside the container to port 3000 on the host, so you’ll access Open WebUI at http://localhost:3000.
--add-host=host.docker.internal:host-gateway: Allows the container to talk to the host system, useful when running other services alongside Open WebUI.
-v open-webui:/app/backend/data: Creates a persistent storage volume named open-webui to save chat history and settings.
--name open-webui: Assigns a custom name to the container for easy reference.
--restart always: Ensures the container automatically restarts if your system reboots or if Open WebUI crashes.
ghcr.io/open-webui/open-webui:main: The Docker image for Open WebUI, pulled from GitHub’s Container Registry.
Step 3: Access Open WebUI in your browserNow, open your web browser and navigate to: http://localhost:3000 You should see Open WebUI's interface, ready to use with DeepSeek! Once you click on "Create Admin Account," you'll be welcomed by the Open WebUI interface. Since we haven't added any other models yet, the DeepSeek model we downloaded earlier is already loaded and ready to go. Just for fun, I decided to test DeepSeek AI with a little challenge. I asked it to: "Write a rhyming poem under 20 words using the words: computer, AI, human, evolution, doom, boom." And let's just say… the response was a bit scary. 😅 Here's the full poem written by DeepSeek R1: [screenshot of the poem] ConclusionAnd there you have it! In just a few simple steps, you’ve got DeepSeek R1 running locally on your Linux machine with Ollama and Open WebUI. Whether you’ve chosen the Docker route or the traditional installation, the setup process is straightforward and should work on most Linux distributions.
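One more tip before you go: you don’t even need a UI to talk to your local model. Ollama also exposes a small REST API, listening on port 11434 by default, so a quick sanity check from the terminal looks something like this (a minimal sketch, assuming Ollama’s default settings and the deepseek-r1:1.5b model we pulled earlier): curl http://localhost:11434/api/generate -d '{"model": "deepseek-r1:1.5b", "prompt": "Why is the sky blue?", "stream": false}' The reply comes back as JSON, with the generated text in the response field, which makes it easy to wire the model into your own scripts.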
So, go ahead, challenge DeepSeek to write another quirky poem, or maybe put it to work on something more practical. It’s yours to play with, and the possibilities are endless. For instance, I recently ran DeepSeek R1 on my Raspberry Pi 5; while it was a bit slow, it still got the job done. Who knows, maybe your next challenge will be more creative than mine (though, I’ll admit, that poem about "doom" and "boom" was a bit eerie! 😅). Enjoy your new local AI assistant, and happy experimenting! 🤖
  25. By: Janus Atienza Fri, 31 Jan 2025 00:11:21 +0000 In today’s competitive digital landscape, small businesses need to leverage every tool and strategy available to stay relevant and grow. One such strategy is content marketing, which has proven to be an effective way to reach, engage, and convert potential customers. However, for many small business owners, managing content creation and distribution can be time-consuming and resource-intensive. This is where outsourcing content marketing services comes into play. Let’s explore why this approach is not only smart but also essential for the long-term success of small businesses. 1. Expertise and Professional QualityOutsourcing content marketing services allows small businesses to tap into the expertise of professionals who specialize in content creation and marketing strategies. These experts are equipped with the skills, tools, and experience necessary to craft high-quality content that resonates with target audiences. Whether it’s blog posts, social media updates, or email newsletters, professional content marketers understand how to write compelling copy that engages readers and drives results. For Linux/Unix-focused content, this might include experts who understand shell scripting for automation or using tools like grep for SEO analysis. In addition, they are well-versed in SEO best practices, which means they can optimize content to rank higher in search engines, ultimately driving more traffic to your website. This level of expertise is difficult to replicate in-house, especially for small businesses with limited resources. 2. Cost EfficiencyFor many small businesses, hiring a full-time in-house marketing team may not be financially feasible. Content creation involves a range of tasks, from writing and editing to publishing and promoting. This can be a significant investment in terms of both time and money. By outsourcing content marketing services, small businesses can access the same level of expertise without the overhead costs associated with hiring additional employees. This can be especially true in the Linux/Unix world, where open-source tools can significantly reduce software costs. Outsourcing allows businesses to pay only for the services they need, whether it’s a one-off blog post or an ongoing content strategy. This flexibility can help businesses manage their budgets effectively while still benefiting from high-quality content marketing efforts. 3. Focus on Core Business FunctionsOutsourcing content marketing services frees up time for small business owners and their teams to focus on core business functions. Small businesses often operate with limited personnel, and each member of the team is usually responsible for multiple tasks. When content marketing is outsourced, the business can concentrate on what it does best—whether that’s customer service, product development, or sales—without getting bogged down in the complexities of content creation. For example, a Linux system administrator can focus on server maintenance instead of writing blog posts. This improved focus on core operations can lead to better productivity and business growth, while the outsourced content team handles the strategy and execution of the marketing efforts. 4. Consistency and ReliabilityOne of the key challenges of content marketing is maintaining consistency. Inconsistent content delivery can confuse your audience and hurt your brand’s credibility.
Outsourcing content marketing services ensures that content is consistently produced, published, and promoted according to a set schedule. Whether it’s weekly blog posts or daily social media updates, a professional team will adhere to a content calendar, ensuring that your business maintains a strong online presence. This can be further enhanced by using automation scripts (common in Linux/Unix environments) to schedule and distribute content. Consistency is crucial for building a loyal audience, and a reliable content marketing team will ensure that your business stays top-of-mind for potential customers. 5. Access to Advanced Tools and TechnologiesEffective content marketing requires the use of various tools and technologies, from SEO and analytics platforms to content management systems and social media schedulers. Small businesses may not have the budget to invest in these tools or the time to learn how to use them effectively. Outsourcing content marketing services allows businesses to benefit from these advanced tools without having to make a significant investment. This could include access to specialized Linux-based SEO tools or experience with open-source CMS platforms like Drupal or WordPress. Professional content marketers have access to premium tools that can help with keyword research, content optimization, performance tracking, and more. These tools provide valuable insights that can inform future content strategies and improve the overall effectiveness of your marketing efforts. 6. ScalabilityAs small businesses grow, their content marketing needs will evolve. Outsourcing content marketing services provides the flexibility to scale efforts as necessary. Whether you’re launching a new product, expanding into new markets, or simply need more content to engage your growing audience, a content marketing agency can quickly adjust to your changing needs. This is especially relevant for Linux-based businesses that might experience rapid growth due to the open-source nature of their offerings. This scalability ensures that small businesses can maintain an effective content marketing strategy throughout their growth journey, without the need to continually hire or train new employees. ConclusionOutsourcing content marketing services is a smart move for small businesses looking to improve their online presence, engage with their target audience, and drive growth. By leveraging the expertise, cost efficiency, and scalability that outsourcing offers, small businesses can focus on what matters most—running their business—while leaving the content marketing to the professionals. Especially for businesses in the Linux/Unix ecosystem, this allows them to concentrate on technical development while expert marketers reach their specific audience. In a digital world where content is king, investing in high-quality content marketing services can make all the difference. The post Content Marketing for Linux/Unix Businesses: Why Outsourcing Makes Sense appeared first on Unixmen.
