GNU Taler payment system being approved for Swiss use.
And other Linux news, tips, and, of course, memes!
This edition of FOSS Weekly is supported by PikaPods.
❇️ PikaPods: Enjoy Self-hosting Hassle-free
PikaPods allows you to quickly deploy your favorite open source software. All future updates are handled automatically by PikaPods while you enjoy using the software. PikaPods also shares revenue with the original developers of the software.
Carmen from Mission Libre has started a petition to get Qualcomm to release fully-free drivers for their in-production chipsets. If the petition is signed by 5,000 people, a hardcopy of the petition and signatures will be mailed to Qualcomm's head office. We can get 5,000 signatures, can't we?
Also, learn a thing or two about MCP servers, the latest buzzword in the (AI) tech world.
✨ Apps Highlight
If you ever wanted to run an operating system inside your browser, then Puter is the solution for you. It is open source and can be self-hosted as well.
An It's FOSS reader created an FFmpeg AAC Audio Encoder Plugin for DaVinci Resolve. This will help you get effortless AAC audio encoding on Linux if you use DaVinci Resolve video editor.
📽️ Videos I am Creating for You
I tried Microsoft's new terminal editor on Linux! I hate to admit it but I liked what I saw here. This is an excellent approach. I wonder why Linux didn't have something like this before. See it in action 👇
In Xfce, you can use the panel item "Directory Menu" to get quick access to files from anywhere. This is like the Places extension in GNOME, but better.
In its configuration menu, provide the file extensions in the format *.txt;*.jsonc, as shown in the screenshot above, to access those files quickly. Clicking a file opens it in the default app.
🤣 Meme of the Week
The ricing never stops! 👨💻
🗓️ Tech Trivia
On May 27, 1959, MIT retired the Whirlwind computer, a groundbreaking machine famous for pioneering real-time computing and magnetic core memory.
There are two main choices for getting VS Code on Arch Linux:
Install Code - OSS from Arch repositories
Install Microsoft's VS Code from AUR
I know. It's confusing. Let me clear the air for you.
VS Code is an open source project, but the binaries Microsoft distributes are not open source, and they have telemetry enabled.
Code - OSS is the actual open source version of VS Code.
Think of Code - OSS as Chromium browser and VS Code as Google Chrome (which is based on Chromium browser).
Another thing to note here is that some extensions will only work in VS Code, not in the de-Microsofted Code - OSS.
This is why you should think it through: do you want Microsoft's VS Code or its 100% open source counterpart?
Let me show you the steps for both installation methods.
Method 1: Install Code - OSS
✅ Open source version of Microsoft VS Code ✅ Easy to install with a quick pacman command ❌ Some extensions may not work
This is simple. All you have to do is to ensure that your Arch system is updated:
sudo pacman -Syu
And then install Code - OSS with:
sudo pacman -S code
It cannot be simpler than this, can it?
As I mentioned earlier, you may find some extensions that do not work in the open source version of Code.
Also, I noticed that Ctrl+C and Ctrl+V were not working for copy-paste. Instead, they defaulted to Ctrl+Shift+C and Ctrl+Shift+V for reasons unknown to me. I had not made any changes to key bindings, nor had I opted for a Vim plugin.
Removing Code OSS
Removal is equally simple:
sudo pacman -R code
Method 2: Install Microsoft's actual VS Code
✅ Popular Microsoft VS Code that is used by most people ✅ Access to all proprietary features and extensions in the marketplace ❌ Installation may take effort if you don't have an AUR helper
If you don't care much about open source principles and just want to code without thinking about it too much, go with VS Code.
There are a couple of VS Code offerings available in the AUR but the official one is this.
Before installing it, you should remove Code - OSS:
sudo pacman -R code
If you have an AUR helper like yay already installed, use it like this:
yay -S visual-studio-code-bin
Otherwise, install yay first and then use it to install the desired package.
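The usual way to bootstrap yay is to build it from the AUR with makepkg, roughly like this:

```shell
# Install the build tools needed for AUR packages
sudo pacman -S --needed base-devel git
# Clone the yay package from the AUR and build/install it
git clone https://aur.archlinux.org/yay.git
cd yay
makepkg -si
```

Once yay is on your system, the visual-studio-code-bin command above works as shown.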
Don't be deceived by the pretty looking screenshot above. I was using a different theme in VS Code.
Removal
You can use your AUR helper or the super reliable pacman command to remove Microsoft VS Code from Arch Linux.
sudo pacman -R visual-studio-code-bin
I'll let you enjoy your preferred version of VS Code on Arch Linux. Please feel free to use the comment section if you have questions or suggestions.
On Linux, man pages come preloaded with every distribution. They are essentially help pages that you can access using the terminal.
You get an instruction manual when you purchase a new gadget, right? It is just like that.
If you want to know what a command does, just use the man command followed by the command you would like to learn about. While it may seem pretty straightforward, the user experience is a bit dull, as it is all plain text without any decorations or other features.
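For example, to read a command's manual, or to search page descriptions when you don't know the exact name:

```shell
man rsync        # open the manual page for rsync
man -k compress  # search page names and descriptions for a keyword
```

Press q to quit the pager once you are done reading.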
There are some man page alternatives that have tried to modernize the user experience, or give a specific focus to the man pages for particular users. Let me share my quick experience with them.
Love cheat sheets, so you don't have to waste your time scrolling through a barrage of descriptions? That's what TLDR helps you with.
It gives short, actionable examples of how to use commands.
TLDR working
Key Features:
Community-maintained help pages
A simpler, more approachable complement to traditional man pages
Help pages focused on practical examples
TL;DR stands for "Too Long; Didn't Read". It originated as Internet slang, where it is used to indicate that a long text (or parts of it) has been skipped as too lengthy.
Installation
🚧
You cannot have tldr and tealdeer installed at the same time.
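As an example, on Debian/Ubuntu the client is packaged in the repositories (package names can differ per distribution), and usage is simply `tldr` followed by the command name:

```shell
sudo apt install tldr   # Debian/Ubuntu; other distros package it too
tldr tar                # short, example-driven cheat sheet for tar
```

The first run may ask you to update the local page cache before showing results.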
Alright, if you are like me, and probably not looking for anything fancy, but just a colorful man page, you can use the Most pager.
Most as Pager
MOST is a powerful paging program. It supports multiple windows and can scroll left and right, while keeping the same good old man page look with added colors.
Install
sudo apt install most    # Debian/Ubuntu
sudo dnf install most    # Fedora
sudo pacman -Syu most    # Arch Linux
Once installed, edit ~/.bashrc:
nano ~/.bashrc
To add the line:
export PAGER='most'
With the latest most versions, colors may not appear by default. In that case, add the line shown below to ~/.bashrc.
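For reference, a typical most setup in ~/.bashrc looks like this. The MANPAGER line is an extra suggestion of mine, not a requirement: man consults MANPAGER before PAGER, so setting it makes man pages specifically use most.

```shell
# Use most as the default pager
export PAGER='most'
# Make man use most as well (MANPAGER takes precedence over PAGER for man)
export MANPAGER='most'
```

Reload with `source ~/.bashrc` or open a new terminal for the change to take effect.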
Considering you are using a distribution powered by GNOME desktop, you just need to search for the GNOME Help app from the menu. You can also access the same via the terminal using the command yelp.
Using GNOME Help (Yelp) to view man pages
When in the app, press CTRL to open the search bar and type the command you want to look up in the following format:
man:<command>
# For example
man:man
Or, if you are in a browser, go to the address bar (CTRL+L). Here, enter man:man. When asked to open the link in help, click on it.
AI is everywhere, even in your terminal. Having AI built right into the tool lets you use it quickly.
There are a few terminals that come with built-in AI agents to give you all sorts of help, from simple command suggestions to full-fledged deployment plans.
You may use them too if you are an AI aficionado. Warp is one such terminal which is not open source but hugely popular among modern Linux users.
While you have It's FOSS along with the traditional man pages to learn what most commands do on Linux, there are alternatives to man pages which could enhance your learning experience.
If you prefer a GUI, GNOME Help should be helpful or any similar equivalent pre-installed on your distribution. For terminal-based solutions, there are a couple you can try. Take a look at the feature set they offer, and install what you like the most.
What do you prefer the most? Let me know in the comments below!
However, there is a small catch when it comes to compatibility. If you have used several Obsidian-specific plugins, then your notes may not be fully compatible with other plain Markdown editors.
In this article, we will take a look at plugins in Obsidian, how you can install them, and some essential plugins that can make your learning more effective.
But first, a quick heads-up: Obsidian offers two types of plugins:
Core Plugins: These are officially developed and maintained by the Obsidian team. While limited in number, they are stable and deeply integrated.
Community Plugins: Created by users in the Obsidian community, these plugins offer a wide variety of features, although they aren’t officially supported by the core team.
🚧
Note that some plugins may make your Markdown notes fully readable only in Obsidian. This can become a vendor lock-in, so use plugins only according to your needs.
Using the core plugins
Core plugins are officially built by Obsidian and come pre-installed. So, naturally, they are the recommended place to start when it comes to plugins.
Core plugins are displayed in Obsidian settings page. Click on the settings gear icon at the bottom of the Obsidian app window to go to the settings.
Click on the Settings gear
In the settings, select Core Plugins to view the Core plugins.
Select Core Plugins
Most of the core plugins are enabled when you install the Obsidian app. But some plugins will be disabled by default.
A brief description under each plugin tells you what it does, so you can enable or disable them as needed.
I’ve found that community plugins are one of the best ways to boost Obsidian’s capabilities. There’s a massive collection to choose from, and at the time of writing this, there are 2,430 community plugins available for installation.
These plugins are built by third-party developers and go through an initial review process before being listed.
However, since they have the same level of access as Obsidian itself, it’s important to be cautious. If privacy and security are essential for your work, I suggest doing a bit of homework before installing any plugin, just to be safe.
Disable the restricted mode
To protect you from unofficial plugins, Obsidian starts in a restricted mode, where community plugins are disabled. To install community plugins, you need to disable the restricted mode first, much like the auto blocker on some Android phones that blocks app installations from unauthorized sources.
Go to the Obsidian settings and select the Community Plugins option. Here, click on the "Turn on community plugins" button.
Turn on community plugins
This will disable the restricted mode. And, you are all set! 😄
Install community plugins
Once the restricted mode is disabled, you can browse for community plugins and get them installed.
Click on the Browse button
Use the Browse button to go to the plugins page, as shown in the screenshot above. You will reach the plugin store, which lists 2,000+ plugins.
Do not worry about the numbers, just search for what you need, or browse through some suggested options, just like I did.
Plugins Store
When you have spotted a plugin that matches your need, click on it. Now, to install that plugin, use the Install button.
Click on the Install button
Once installed, you can see two additional buttons called Enable and Uninstall. As the name suggests, they are for enabling a plugin or uninstalling a plugin.
Enable/Uninstall a plugin
This can be done more efficiently from the Obsidian settings. For this, go to the Settings → Community plugins → Installed plugins. Here, use the toggle button to enable a plugin.
Enable Plugins in Settings
This section lists all the installed community plugins. You can enable/disable, uninstall, access plugin settings, assign a keybinding, or donate to that particular plugin.
Manually install plugins
🚧
I do not recommend this method, since most plugins are available in the Obsidian store and have gone through an initial review.
Even though it is not recommended, if you want to install a plugin manually, for version compatibility or other personal reasons, make sure to source it from the official repositories or websites.
If it is on GitHub, go to the releases page of the plugin's GitHub repository and download the main.js, manifest.json, and styles.css files.
Download Plugin files
Now, create a directory with the name of the project in the <Your-obsidian-vault>/.obsidian/plugins directory. Press CTRL+H to view hidden files.
Paste plugin contents
In my case, I tried Templater. Next, I transferred the downloaded files to this project directory. Now, open Obsidian, go to Settings → Community plugins, and enable the new plugin.
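In shell terms, the copy step looks roughly like this. The vault path and plugin folder name below are placeholders for your own, and the stylesheet (often styles.css) is optional; its exact name comes from the plugin's release files:

```shell
# Placeholder paths; adjust to your own vault and plugin
VAULT="$HOME/Documents/MyVault"
PLUGIN_DIR="$VAULT/.obsidian/plugins/templater-obsidian"
mkdir -p "$PLUGIN_DIR"
# Copy the files downloaded from the plugin's GitHub releases page
cp ~/Downloads/main.js ~/Downloads/manifest.json ~/Downloads/styles.css "$PLUGIN_DIR"
```

After copying, restart or reload Obsidian so it can pick up the new plugin folder.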
Enable manually installed plugin
Install beta version of plugins
This is not for regular users, but for those who want to be testers and reviewers of beta plugins. I usually do this to test interesting things or help with the development of plugins I believe in.
We are using the BRAT (Beta Reviewers Auto-Update Tool) to install and update beta versions of Obsidian plugins.
First, install the BRAT plugin from the Obsidian plugins store and enable it.
Install BRAT Plugin
Now, go to the GitHub repository of the plugin you want to install the beta version of. Copy the URL of the repository.
Select the BRAT plugin from Settings → Community plugins and click on the “Add beta plugin” button.
Click on the "Add beta plugin" button
Here, add the GitHub URL, select a version from the list, and click on the Add Plugin button.
Add URL and select version
You can see that the plugin has been added with BRAT. Since we selected a specific version, it is shown as frozen and will not be updated. Select "Latest" as the version to get updates.
Beta plugin added using BRAT
Update plugins
To update community plugins, go to Obsidian settings and select Community plugins.
Here, click on the Check for updates button.
If there is an update available, it will notify you.
There is an update available for one plugin.
Click on Update All to update all the plugins that have an update available. Or, scroll down and update individual plugins by clicking on the Update button.
Move community plugins
You can copy selected plugins, or all of them, from one vault to another to avoid installing everything from scratch.
Go to the <your-obsidian-vault>/.obsidian/plugins directory. Now, copy directories of those plugins you want to use in another vault.
Copy those directories to the plugins directory of the other (or newer) vault: <your-new-vault>/.obsidian/plugins.
If there is no plugins directory in the new vault, create one. Once you open the new vault, you will be asked to trust the plugins.
If you are the one who copied the folders and nobody else was involved, click on the "Trust author and enable plugins" button.
Or you can use the "Browse Vault in restricted mode" and then enable the plugins by going to Settings → Community plugins → Turn on Community plugins → Enable plugins.
Plugin security notification
In both cases, you don't have to install the plugin from scratch.
Don't forget to enable the plugins through Settings → Community plugins to start using them.
Remove a plugin
Removing a plugin is easy. Go to the community plugins in settings and click on the delete button (bin icon) adjacent to the plugin you want to remove.
Remove a plugin
Or, if you just want to disable all community plugins, you can turn on the restricted mode. Click on the Turn on and reload button in community plugins settings.
Turn on restricted mode
If you later turn restricted mode back off, all the installed plugins will be enabled again. Pretty easy, right?
Another way to remove plugins is to delete specific folders in the plugins directory, but it is unnecessary unless you are testing something specific.
🚧
Don't use this method for everything since it is safer to do so from within Obsidian.
Go to the <your-obsidian-vault>/.obsidian/plugins directory and remove the directory that has the name of the plugin you want to remove.
Now open Obsidian and you won't see that plugin. Voila!
Enjoy using Obsidian
I have shared many more Obsidian tips to improve your experience with this wonderful tool.
It took me way longer than I’d like to admit to wrap my head around MCP servers.
At first glance, they sound like just another protocol in the never-ending parade of tech buzzwords orbiting AI.
But trust me, once you understand what they are, you start to see why people are obsessed with them.
This post isn’t meant to be the ultimate deep dive (I’ll link to some great resources for that at the end). Instead, consider it just a lil introduction or a starter on MCP servers.
And no, I’m not going to explain MCP using USB-C as a metaphor, if you get that joke, congrats, you’ve clearly been Googling around like the rest of us. If not… well, give it time. 😛
MCP, short for Model Context Protocol, is an open standard introduced by Anthropic. Its purpose is to improve how AI models interact with external systems, not by modifying the models themselves, but by providing them structured, secure access to real-world data, tools, and services.
An MCP server is a standalone service that exposes specific capabilities such as reading files, querying databases, invoking APIs, or offering reusable prompts, in a standardized format that AI models can understand.
Rather than building custom integrations for every individual data source or tool, developers can implement MCP servers that conform to a shared protocol.
This eliminates the need for repetitive boilerplate and reduces complexity in AI applications.
Quite a bit. Depending on how they’re set up, MCP servers can expose:
Resources – Stuff like files, documents, or database queries that an AI can read.
Tools – Actions like sending an email, creating a GitHub issue, or checking the weather.
Prompts – Predefined instructions or templates that guide AI behavior in repeatable ways.
Each of these is exposed through a JSON-RPC 2.0 interface, meaning AI clients can query what's available, call the appropriate function, and get clean, structured responses.
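To make that concrete, here is a minimal, self-contained sketch of what such a JSON-RPC 2.0 exchange could look like. The method names mirror MCP conventions, but the handler and the get_weather tool are invented for illustration; a real MCP server does far more (transports, capability negotiation, typed schemas):

```python
import json

# A registry of "tools" the server exposes; get_weather is entirely made up
TOOLS = {
    "get_weather": lambda args: f"Sunny in {args['city']}",
}

def handle(request_json: str) -> str:
    """Answer a JSON-RPC 2.0 request for tool listing or invocation."""
    req = json.loads(request_json)
    if req["method"] == "tools/list":
        result = {"tools": sorted(TOOLS)}
    elif req["method"] == "tools/call":
        tool = TOOLS[req["params"]["name"]]
        result = {"content": tool(req["params"]["arguments"])}
    else:
        error = {"code": -32601, "message": "Method not found"}
        return json.dumps({"jsonrpc": "2.0", "id": req["id"], "error": error})
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

request = json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Oslo"}},
})
print(handle(request))
```

The point is the shape of the conversation: the client asks what is available, calls a named capability with structured arguments, and gets a structured result or a standard error back.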
So... how does an MCP server actually work?
MCP servers follow a well-defined architecture intended to standardize how AI models access external tools, data, and services.
Each part of the system has a clear role, contributing to a modular and scalable environment for AI integration.
Host Applications: These are the environments where AI agents operate, such as coding assistants, desktop apps, or conversational UIs.
They don’t interact with external systems directly, but instead rely on MCP clients to broker those connections.
MCP Clients: The client is responsible for managing the connection between the AI agent and the MCP server. It handles protocol-level tasks like capability discovery, permissions, and communication state.
Clients maintain direct, persistent connections to the server, ensuring requests and responses are handled correctly.
MCP Servers: The server exposes defined capabilities, such as reading files, executing functions, or retrieving documents, using the Model Context Protocol.
Each server is configured to present these capabilities in a standardized format that AI models can interpret without needing custom integration logic.
Underlying Data or Tooling: This includes everything the server is connected to: file systems, databases, external APIs, or internal services.
The server mediates access, applying permission controls, formatting responses, and exposing only what the client is authorized to use.
This separation of roles between the model host, client, server, and data source allows AI applications to scale and interoperate cleanly.
Developers can focus on defining useful capabilities inside a server, knowing that any MCP-compatible client can access them predictably and securely.
Wait, so how are MCP Servers different from APIs?
Fair question. It might sound like MCP is just a fancy wrapper around regular APIs, but there are key differences:
| Feature | Traditional API | MCP Server |
|---|---|---|
| Purpose | General software communication | Feed AI models with data, tools, or prompts |
| Interaction | Requires manual integration and parsing | Presents info in a model-friendly format |
| Standardization | Varies wildly per service | Unified protocol (MCP) |
| Security | Must be implemented case-by-case | Built-in controls and isolation |
| Use case | Backend services, apps, etc. | Enhancing AI agents like Claude, Copilot, or Cursor |
Basically, APIs were made for apps. MCP servers were made for AI.
Want to spin up your own self-hosted MCP Server?
While building a custom MCP server from scratch is entirely possible, you don’t have to start there.
There’s already a growing list of open-source MCP servers you can clone, deploy, and start testing with your preferred AI assistant like Claude, Cursor, or others.
mcpservers.org is an amazing website to find open-source MCP Servers
If you're interested in writing your own server or extending an existing one, stay tuned. We're covering that in a dedicated upcoming post, where we'll walk through the process step by step using the official Python SDK.
Make sure you're following, or better yet, subscribe, so you don't miss it.
Want to learn more on MCP?
Here are a few great places to start:
I personally found this a good introduction to MCP Servers
And there you have it, a foundational understanding of what MCP servers are, what they can do, and why they’re quickly becoming a cornerstone in the evolving landscape of AI.
We’ve only just scratched the surface, but hopefully, this introduction has demystified some of the initial complexities and highlighted the immense potential these servers hold for building more robust, secure, and integrated AI applications.
Stay tuned for our next deep dive, where we’ll try and build an MCP server and a client from scratch with the Python SDK. Because really, the best way to learn is to get your hands dirty.
Imagine Oh My Zsh but for Bash. The Bash-it framework lets you enjoy a beautiful bash shell experience. I am just surprised that it is not called Oh My Bash 😜
Remember your favorite tech websites like AnandTech or magazines like Linux Voice? They don't exist anymore.
In the age of AI Overview in search engines, more and more people are not even reaching the websites from where AI is 'copying' the text. As a result, your favorite websites continue to shut down.
More than ever, now is the crucial time to save your favorite websites from the AI onslaught.
If you like what we do and would love to support our work, please become an It's FOSS Plus member. It costs $24 a year, i.e., $2 a month. Even a burger costs more than that. For skipping a burger a month, you get an ad-free reading experience with the satisfaction of helping the desktop Linux community.
How about an open source, decentralized alternative to the likes of Discord and Slack? Peersuite is a self-hostable peer-to-peer workspace that isn't user data hungry.
In file managers like Nemo, Nautilus, etc., you can easily create file duplicates by pressing the CTRL key and dragging the file to a blank space in the window.
If you drop a file while pressing the CTRL key when in another folder, the file will be copied to that directory.
Use CTRL+Z to undo the file duplication. During this, your file manager will ask you whether you want to delete the copied file.
🤣 Meme of the Week
The man's got a Debian-flavored beard. 😆
🗓️ Tech Trivia
On May 18, 1998, the U.S. Department of Justice sued Microsoft, alleging that the company was illegally monopolizing the web browser market by integrating its Internet Explorer browser into its Windows operating system.
Working with code often involves repetition: changing variable names, updating values, tweaking class names, or adding the same prefix across several lines.
If you find yourself making the same changes again and again, line by line, then multi-cursor editing in Visual Studio Code can help simplify that process.
In this part of our ongoing VS Code series, we’ll take a closer look at this feature and how it can make everyday tasks quicker and more manageable.
Why use multiple cursors?
Multi-cursor editing lets you place more than one cursor in your file so you can edit several lines at once.
Instead of jumping between lines or writing the same change repeatedly, you can type once and apply that change across multiple places.
Here are a few common situations where it comes in handy:
Renaming a variable or function in multiple places.
Adding or removing the same snippet of code across several lines.
Editing repeated structures (like object keys, class names, or attribute values).
Commenting out a bunch of lines quickly.
Once you start using it, you’ll notice it helps reduce small repetitive tasks and keeps your focus on the code itself.
Placing multiple cursors: mouse and keyboard
There are two main ways to place multiple cursors in VS Code: using the mouse or keyboard shortcuts.
Let’s start with the mouse-based approach, which is more visual and straightforward for beginners.
Then, we’ll move on to keyboard shortcuts, which are faster and more efficient once you’re comfortable.
Method 1: Using the mouse
To place cursors manually using your mouse:
Hold down Alt (Windows/Linux) or Option (Mac), then click anywhere you want to insert a new cursor.
Each click places a new blinking cursor. You can now type, delete, or paste, and the change will reflect at all cursor positions simultaneously.
To cancel all active cursors and return to a single one, press Esc.
This method is handy for quick edits where the lines aren’t aligned or when you want more control over cursor placement.
Method 2: Using keyboard shortcuts
The mouse method is a good starting point, but learning keyboard shortcuts can save more time in the long run.
Below are a few keyboard-driven techniques to add and manage multiple cursors efficiently.
Add Cursors Vertically in a Column
When you want to add cursors above or below the current line to edit a block of similar lines (like inserting or deleting the same code at the beginning of each line), use this shortcut:
Ctrl + Alt + Up/Down arrow keys.
This aligns cursors in a vertical column, making it easier to apply the same action to adjacent lines.
Select the next occurrence of the current word
To select and edit repeated words one by one, such as variable names or function calls, place your cursor on the word and use: Ctrl + D
Each press selects the next matching word and adds a cursor to it. You can press it repeatedly to continue selecting further matches.
Select all occurrences of a word in the file
If you want to update every instance of a word across the file at once, for example, replacing a class name or a repeated property, use: Ctrl + Shift + L
This selects all matching words and places a cursor at each one. It’s powerful, but use with care in large files to avoid unintentional edits.
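These bindings can also be customized in keybindings.json (File → Preferences → Keyboard Shortcuts, then the "Open Keyboard Shortcuts (JSON)" icon). As a sketch, this is how the defaults covered here map to VS Code command IDs:

```json
[
  { "key": "ctrl+alt+down", "command": "editor.action.insertCursorBelow" },
  { "key": "ctrl+d", "command": "editor.action.addSelectionToNextFindMatch" },
  { "key": "ctrl+shift+l", "command": "editor.action.selectHighlights" }
]
```

Change the "key" values to remap; entries in your keybindings.json override the defaults.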
Editing with multiple cursors
Once your cursors are in place, editing works just like usual:
Type to insert text across all cursors.
Use Backspace or Delete to remove characters.
Paste snippets — they get applied to each cursor position.
Standard commands like cut, copy, undo, and redo all function as expected.
Just keep an eye on alignment. If cursors are placed unevenly across lines, your edits might not be consistent.
Multi-cursor editing is one of those small but effective features in VS Code that can make repetitive tasks less of a chore.
You don't need to learn all the shortcuts right away. Start simple: try placing cursors with Ctrl + D or adding cursors vertically, and build from there.
As you become more comfortable, these techniques will become second nature and help you focus more on writing logic and less on repeating edits.
While setting up a Raspberry Pi 5 for a new project, I decided to go with a headless setup - no display, keyboard, or mouse. I flashed the SD card, connected power, and waited for the Pi to appear on my network.
But nothing showed up. I scanned my network, double-checked the router’s client list, still no sign of the Pi. Without access to a display, I had no immediate way to see what was happening under the hood.
Then I noticed something: the green status LED was blinking in a repeating pattern. It wasn’t random, it looked deliberate. That small detail led me down a rabbit hole, and what I found was surprisingly useful.
The Raspberry Pi’s onboard LEDs aren’t just indicators, they’re diagnostic tools. When the Pi fails to boot, it can signal the cause through specific blink patterns.
If you know how to read them, you can identify problems like missing boot files, SD card issues, or hardware faults without plugging in a monitor.
In this guide, we’ll decode what those LED signals mean and how to use them effectively in your troubleshooting process.
📋
The placement, colors, and behavior of the status LEDs vary slightly across different Raspberry Pi models. In this guide, we'll go through the most popular models and explain exactly what each LED pattern means.
Raspberry Pi 5
The Raspberry Pi 5 is a major step up in terms of power and architecture. It packs a 2.4GHz quad-core ARM Cortex-A76 CPU, supports up to 16GB of LPDDR4X RAM, and includes PCIe, RTC, and power button support.
Raspberry Pi 5
But when it comes to diagnostics, the big upgrade is in the STAT LED.
On the Pi 5:
Red LED (PWR): Shows power issues (not always ON by default!)
Green LED (STAT): Shows SD card activity and blink codes
Ethernet LEDs: Show network status
Here’s what the green LED blink codes mean:
| Long flashes | Short flashes | Meaning |
|---|---|---|
| 0 | 3 | Generic failure to boot |
| 0 | 4 | start.elf not found |
| 0 | 7 | kernel.img not found |
| 0 | 8 | SDRAM failure |
| 0 | 9 | Insufficient SDRAM |
| 0 | 10 | In HALT state |
| 2 | 1 | Boot device not FAT formatted |
| 2 | 2 | Failed to read boot partition |
| 2 | 3 | Extended partition not FAT |
| 2 | 4 | File signature/hash mismatch |
| 3 | 1 | SPI EEPROM error |
| 3 | 2 | SPI EEPROM write protected |
| 3 | 3 | I2C error |
| 3 | 4 | Invalid secure boot configuration |
| 4 | 3 | RP1 not found |
| 4 | 4 | Unsupported board type |
| 4 | 5 | Fatal firmware error |
| 4 | 6 | Power failure Type A |
| 4 | 7 | Power failure Type B |
Thanks to the bootloader residing on the onboard EEPROM (Electrically Erasable Programmable Read-Only Memory), the Raspberry Pi 5 can perform much more detailed self-checks right from the start.
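If you'd rather not squint at the table while counting flashes, the lookup is simple enough to script. This little helper just transcribes the table above into a dictionary keyed on (long flashes, short flashes):

```python
# Blink-code lookup for the Pi 4/5 green ACT/STAT LED, transcribed
# from the table above: key is (long_flashes, short_flashes).
BLINK_CODES = {
    (0, 3): "Generic failure to boot",
    (0, 4): "start.elf not found",
    (0, 7): "kernel.img not found",
    (0, 8): "SDRAM failure",
    (0, 9): "Insufficient SDRAM",
    (0, 10): "In HALT state",
    (2, 1): "Boot device not FAT formatted",
    (2, 2): "Failed to read boot partition",
    (2, 3): "Extended partition not FAT",
    (2, 4): "File signature/hash mismatch",
    (3, 1): "SPI EEPROM error",
    (3, 2): "SPI EEPROM write protected",
    (3, 3): "I2C error",
    (3, 4): "Invalid secure boot configuration",
    (4, 3): "RP1 not found",
    (4, 4): "Unsupported board type",
    (4, 5): "Fatal firmware error",
    (4, 6): "Power failure Type A",
    (4, 7): "Power failure Type B",
}

def decode(long_flashes: int, short_flashes: int) -> str:
    """Return the meaning of a blink pattern, or a fallback if unknown."""
    return BLINK_CODES.get((long_flashes, short_flashes), "Unknown pattern")

print(decode(0, 4))  # start.elf not found
print(decode(4, 3))  # RP1 not found
```

Count the long flashes first, then the short ones, and feed the pair to decode().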
Raspberry Pi 4 & 400
The Raspberry Pi 4 and the keyboard-integrated Raspberry Pi 400 also feature sophisticated LED diagnostics, similar in many ways to the Pi 5.
Raspberry Pi 4B
As on the Pi 5, the onboard LEDs double as diagnostic tools. These boards typically have:
Red LED (PWR): Indicates power status. On the Pi 4/400, this LED is solid ON when the board is receiving sufficient power. If it's off or flickering, suspect a power issue.
Green LED (ACT): The activity LED. Like the Pi 5's, it shows SD card activity and also flashes specific patterns to indicate boot issues.
Ethernet LEDs: Found on the Ethernet port (Pi 4 only), showing network link and activity.
Like the Pi 5, the Raspberry Pi 4 and 400 boot from onboard EEPROM, enabling them to run more detailed diagnostics than older models.
The flash codes for the green ACT LED on the Raspberry Pi 4 and 400 are identical to the Pi 5 codes listed above.
Raspberry Pi 3 Model B, B+, and A+
Moving back a generation, the Raspberry Pi 3 models were popular for their performance and features.
Raspberry Pi 3B+
These boards typically have:
Red LED (PWR): Solid ON when receiving adequate power. Off or flickering suggests a power problem.
Green LED (ACT): Indicates SD card activity. It also flashes error codes if the boot process fails.
Ethernet LEDs: Found on the Ethernet port (Model B and B+), showing network link and activity. The slimline Model A+ lacks the Ethernet port and thus these LEDs.
Unlike the Pi 4 and 5, the Raspberry Pi 3 boards rely entirely on the SD card for the initial boot process (there's no onboard EEPROM bootloader).
This means the diagnostic capabilities are slightly less extensive, but the green ACT LED still provides valuable clues about common boot problems.
Here's what the green ACT LED flashes mean on the Raspberry Pi 3 models:
| Flashes | Meaning |
|---|---|
| 3 | start.elf not found |
| 4 | start.elf corrupt |
| 7 | kernel.img not found |
| 8 | SDRAM not recognized (bad image or damaged RAM) |
| Irregular | Normal read/write activity |
Raspberry Pi 2 and Pi 1 (Model B, B+, A, A+)
This group covers some of the earlier but still widely used Raspberry Pi boards, including the Raspberry Pi 2 Model B, and the various iterations of the original Raspberry Pi 1 (Model B, Model B+, Model A, Model A+).
Raspberry Pi 1B+
Their LED setups are similar to the Pi 3:
Red LED (PWR): Solid ON for sufficient power. Off or flickering indicates a power problem.
Green LED (ACT): Shows SD card activity and signals boot errors.
Ethernet LEDs: Present on models with an Ethernet port (Pi 2 B, Pi 1 B, Pi 1 B+).
They lack advanced diagnostics and rely on the same basic LED flash codes as the Pi 3 series:
| Flashes | Meaning |
|---|---|
| 3 | start.elf not found |
| 4 | start.elf corrupt |
| 7 | kernel.img not found |
| 8 | SDRAM not recognized |
| Irregular | Normal SD card activity |
Raspberry Pi Zero and Zero W
The incredibly compact Raspberry Pi Zero and Zero W models are known for their minimalist design, and this extends to their LEDs as well.
Raspberry Pi Zero W
The most significant difference here is the absence of the Red (PWR) LED. The Pi Zero series only features:
Green LED (ACT): This is the only status LED. It indicates SD card activity and, importantly, signals boot errors.
| Flashes | Meaning |
|---|---|
| 3 | start.elf not found |
| 4 | start.elf corrupt |
| 7 | kernel.img not found |
| 8 | SDRAM not recognized |
| Irregular | Normal SD activity |
Since there's no PWR LED, diagnosing power issues can be slightly trickier initially. If the green ACT LED doesn't light up at all, it could mean no power, an improperly inserted SD card, or a corrupted image preventing any activity.
Pironman 5 Case With Tower Cooler and Fan
This dope Raspberry Pi 5 case has a tower cooler and dual RGB fans to keep the device cool. It also extends your Pi 5 with an M.2 SSD slot and two standard HDMI ports.
Manually formatting code can be tedious, especially in fast-paced or collaborative development environments.
While consistent formatting is essential for readability and maintainability, doing it by hand slows you down and sometimes leads to inconsistent results across a project.
In this article, I’ll walk you through the steps to configure Visual Studio Code to automatically format your code each time you save a file.
We'll use the VS Code extension called Prettier, one of the most widely adopted tools for enforcing code style in JavaScript, TypeScript, and many other languages.
By the end of this guide, you'll have a setup that keeps your code clean with zero extra effort.
Step 1: Install Prettier extension in VS Code
To start, you'll need the Prettier - Code Formatter extension. This tool supports JavaScript, TypeScript, HTML, CSS, React, Vue, and more.
Open VS Code, go to the Extensions sidebar (or press Ctrl + Shift + X), and search for Prettier.
Click on Install and reload VS Code if prompted.
Step 2: Enable format on save
Now that Prettier is installed, let’s make it run automatically whenever you save a file.
Open Settings via Ctrl + , or by going to File > Preferences > Settings.
In the search bar at the top, type format on save and then check the box for Editor: Format On Save.
This tells VS Code to auto-format your code whenever you save a file, but that’s only part of the setup.
Troubleshooting
If saving a file doesn’t automatically format your code, it’s likely due to multiple formatters being installed in VS Code. Here’s how to make sure Prettier is set as the default:
Open any file in VS Code and press Ctrl + Shift + P (or Cmd + Shift + P on Mac) to bring up the Command Palette.
Type “Format Document” and select the option that appears.
If multiple formatters are available, VS Code will prompt you to choose one.
Select “Prettier - Code formatter” from the list.
Now try saving your file again. If Prettier is correctly selected, it should instantly reformat the code on save.
In some cases, you might want to save a file without applying formatting, for example, when working with generated code or temporary formatting quirks. To do that, open the Command Palette again and run “Save Without Formatting.”
Optional: Advanced configuration
Prettier works well out of the box, but you can customize how it formats your code by adding a .prettierrc configuration file at the root of your project.
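As an example, a minimal .prettierrc with the three options described below might look like this (the exact values are one reasonable choice, not the only valid layout):

```json
{
  "singleQuote": true,
  "trailingComma": "es5",
  "semi": false
}
```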
This configuration tells Prettier to use single quotes, add trailing commas where valid in ES5 (like in objects and arrays), and omit semicolons at the end of statements.
There are many other options available such as adjusting print width, tab width, or controlling how JSX and HTML are handled.
You can find the full list of supported options in Prettier’s documentation, but for most projects, a few key settings in .prettierrc go a long way.
Try It Out
Create or open any file, JavaScript, TypeScript, HTML, etc. Add some poorly formatted code.
Then simply save the file (Ctrl + S or Cmd + S), and watch Prettier instantly clean it up.
As you can see, Prettier neatly indents and spaces each part of the HTML code, even across different embedded languages.
Wrapping Up
Whether you are vibe coding or writing everything on your own, proper formatting is a sign of good code.
We’ve already covered the fundamentals of writing clean, consistent code: indentation, spacing, and word wrap. Automatic formatting builds directly on top of those fundamentals.
Once configured, it removes the need to think about structure while coding, letting you focus on the logic.
If you're also wondering how to actually run JavaScript or HTML inside VS Code, we've covered that as well, so check those guides if you're setting up your workflow from scratch.
If you’re not already using automatic formatting, it’s worth making part of your workflow.
And if you use a different tool or approach, I’d be interested to hear how you’ve set it up. Let us know in the comments. 🧑💻
Sausage is a word forming game, inspired by the classic Bookworm. Written in bash script, you can use it on any Linux distribution.
Playing Sausage
The goal of the game is simple.
Earn points by spotting words.
Spotting longer words gives you coloured letters, and using coloured letters earns more points.
Spotting shorter words introduces red letters; if a red letter reaches the bottom, you lose the game.
Installation
✋
Since it's a terminal-based game, it requires a few commands for installation. I advise learning the command line essentials from our terminal basics series.
Technically, you run Sausage from the script itself. Still, it creates a few directories on first run. This screenshot from the official repository shows them:
So, to 'uninstall' Sausage, remove the cloned repository; if you also want to remove the game-related files, check the screenshot above and delete them.
Up for a (word) game?
If you ever played the classic Bookworm, Sausage will be pure nostalgia. And if you never played it before, it could still be fun to try if you like this kind of game.
It's one of those amusing things you can do in the terminal.
❇️ Supercharge Your Search with Aiven for OpenSearch® – Get $100 Sign-Up Bonus! 🚀
If you've been searching for a way to effortlessly deploy and manage OpenSearch, I've got great news for you! Aiven for OpenSearch® lets you deploy powerful, fully managed search and analytics clusters across AWS, Google Cloud, DO and Azure – all without the hassle of infrastructure management.
🔥 Why Choose Aiven for OpenSearch®?
Streamlined Search Applications – Focus on building, not maintaining.
Real-Time Visualization – Instantly visualize your data with OpenSearch Dashboards.
99.99% Uptime – Reliable and always available.
Seamless Integrations – Plug into Kafka, Grafana, and more with a few clicks.
Sign up using this link and claim a $100 bonus credit to explore and test Aiven for OpenSearch®! 💰
Desktop Linux is mostly neglected by the industry but loved by the community. For the past 12 years, It's FOSS has been helping people use Linux on their personal computers. And we are now facing the existential threat from AI models stealing our content.
If you like what we do and would love to support our work, please become an It's FOSS Plus member. It costs $24 a year (less than the cost of a burger meal each month), and you get an ad-free reading experience with the satisfaction of helping the desktop Linux community.
In Linux Mint Cinnamon panel, you can change the way time is displayed. Just right-click on the time in the panel and select Configure. In the configuration window, enable the "Use a custom date format" option.
Now, enter your preferred format in the "Date format" and "Date format for tooltip" fields.
You can click on the "Show information on date format syntax" button, which will lead you to a detailed documentation about available date format options if you feel lost.
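If the custom format follows strftime-style tokens (the linked syntax documentation describes the exact set Cinnamon supports; treat the token assumption here as unverified), you can preview a candidate format from a terminal with Python before pasting it into the applet:

```python
import time

# Preview a candidate clock format, e.g. "Tuesday 01 July, 14:05".
# The tokens below are standard strftime codes; check Cinnamon's own
# syntax documentation for the set it actually accepts.
fmt = "%A %d %B, %H:%M"
print(time.strftime(fmt))
```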
🤣 Meme of the Week
The hate is real with this one. ☠️
🗓️ Tech Trivia
To challenge Intel's 486 dominance in the early 1990s, Texas Instruments (TI) sold their own line of 486 microprocessors. However, these TI-branded chips were actually designed by Cyrix, offering software compatibility at a potentially lower cost, yet ultimately failing to dethrone Intel in the microprocessor market.
Indentation is how code is visually spaced. It helps define structure, scope, and readability. For example, Python requires indentation to define blocks of code.
VS Code lets you customize indentation per file, per language, or globally.
Let’s explore all the ways to tweak that!
1. Change indentation via the status bar (per-file basis)
This is the easiest method and perfect when you're editing just one file.
Open a file in VS Code.
Look at the bottom-right corner of the window. You’ll see something like Spaces: 4 or Tab Size: 4.
Click that label, a menu pops up!
Now, you can choose:
Indent Using Tabs
Indent Using Spaces
And below that, choose how many spaces (2, 4, 8 - up to you).
Just changing the indentation setting doesn’t automatically re-indent the whole file. You’ll want to reformat the document too.
Here’s how:
Press Ctrl + Shift + P (Linux/Windows) or Cmd + Shift + P (macOS).
Type Format Document and select it.
Or use the shortcut:
Ctrl + Shift + I on Linux
Shift + Alt + F on Windows
Shift + Option + F on macOS
Boom! The file gets prettied up with your chosen indentation.
2. Set global indentation in user settings
Want to make your indentation choice apply to all new files in VS Code? Here’s how:
Open Command Palette with Ctrl + Shift + P or F1.
Type Preferences: Open User Settings.
In the Settings UI, search for Tab Size and set it (e.g., 4).
Then search Insert Spaces and make sure it’s checked.
This tells VS Code:
“Whenever I press Tab, insert 4 spaces instead.”
Also check for Detect Indentation, if it’s ON, VS Code will override your settings based on the file content. Disable it if you want consistency across files.
3. Set project-specific indentation (Workspace settings)
Maybe you want different indentation just for one project, not globally.
Open the project folder in VS Code.
Go to the Command Palette and select Preferences: Open Workspace Settings.
Switch to the Workspace tab.
Search and set the same Tab Size, Insert Spaces, and Detect Indentation options.
These get saved inside your project’s .vscode/settings.json file.
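Under the hood, those three options end up as plain JSON; a .vscode/settings.json for a 2-space project might look like this (a sketch, the values are just an example):

```json
{
  "editor.tabSize": 2,
  "editor.insertSpaces": true,
  "editor.detectIndentation": false
}
```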
Perfect if you want 2-space indentation in a JS project but 4 spaces in a Python project you're working on separately.
4. Set indentation based on programming language
Now, here's the power-user move. Let’s say you want:
4 spaces for Python
2 spaces for JavaScript and TypeScript
Easy!
Open the Command Palette → Preferences: Open User Settings (JSON)
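Then add language-scoped blocks; something along these lines should achieve the split described above (the exact spacing values are, of course, up to you):

```json
{
  "[python]": {
    "editor.tabSize": 4,
    "editor.insertSpaces": true
  },
  "[javascript]": {
    "editor.tabSize": 2
  },
  "[typescript]": {
    "editor.tabSize": 2
  }
}
```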
You can find all language identifiers in the VS Code docs if you want to customize more.
You can also drop this into your .vscode/settings.json file if you want project-level overrides.
Bonus Tip: Convert tabs to spaces (and vice versa)
Already working on a file but the indentation is inconsistent?
Open the Command Palette → Type Convert Indentation
Choose either:
Convert Indentation to Spaces
Convert Indentation to Tabs
You can also do this from the status bar at the bottom.
If you need to convert all tabs in the file to spaces:
Press Ctrl + F
Expand the search box
Enable Regex (.* icon)
Search for \t and replace it with two or four spaces
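The same conversion is easy to script outside the editor, too. As a standalone sketch (unrelated to VS Code itself), Python distinguishes the two flavours of this conversion nicely:

```python
messy = "\tdef run():\n\t\treturn 1"

# Literal replacement: every tab becomes exactly four spaces,
# which is what the editor's regex search-and-replace does.
spaces = messy.replace("\t", "    ")

# Tab-stop-aware alternative: pads to the next multiple of 4,
# so a tab after existing text may insert fewer than 4 spaces.
aligned = messy.expandtabs(4)

print(spaces)
```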
Wrapping up
Like word wrapping in VS Code, indentation may seem like a small thing, but it's one of the cornerstones of clean, readable code.
Whether you're coding solo or collaborating on big projects, being consistent with indentation helps avoid annoying bugs (especially in Python!) and keeps the codebase friendly for everyone.
VS Code makes it super easy to control indentation your way, whether you want to set it globally, per project, or even per language.
We’ll be back soon with another helpful tip in our VS Code series.
I had an old pair of hi-fi speakers gathering dust in a forgotten corner of the house.
The only problem? They needed a Bluetooth dongle and DAC to work, and I didn’t have either. But with my love for DIY and a determination to salvage my musical aspirations, I decided to take a different route.
I thought of giving my old speakers a new life by converting them into Bluetooth speakers. In this article, I’ll take you through my journey of reviving these old speakers.
From putting together a DAC, connecting both speakers, and grappling with my first soldering iron (spoiler: it wasn’t pretty), to finally using my old Raspberry Pi 3 as the brains behind a fully functional Bluetooth speaker system.
It wasn’t perfect, but the experience taught me a lot and gave me a setup that delivers impressive sound without spending a fortune.
Let’s dive into the details!
What I used
I gathered a mix of new and existing components. Here’s everything I used for this project:
Two Hi-Fi Speakers: These were the stars of the show, an old pair that had been lying unused for years. Their sound potential was too good to ignore, and this project was all about giving them a second chance.
Yep, I forgot to clean the speakers before capturing this picture
DAC Chipset: A Digital-to-Analog Converter (DAC) was essential to drive the speakers. I used a basic DAC module that supported input from a 3.5mm jack and output for the speakers.
Check your speakers' ratings before ordering a DAC for yourself. Mine provides a stereo output of 30W per channel and requires 12-24V.
Soldering Iron: This was my first time using a soldering iron, and let’s just say my initial attempts were far from perfect. I used it to solder the speaker wires to the DAC, which was crucial for connecting the entire system.
Simple ol' soldering iron, nothing fancy here. It gets the job done.
12V 2A Power Supply: To power the DAC, I used a 12V 2A adapter. Make sure your power supply matches the specifications of your DAC module for safe and efficient operation.
3.5mm Audio Cable: This was used to connect the DAC’s audio output to the Raspberry Pi’s 3.5mm jack.
Raspberry Pi 3: I used an old Raspberry Pi 3 that I had lying around. Any Raspberry Pi model with a 3.5mm jack will work for this project, but if you have a newer model with HDMI-only output, additional configuration may be required.
My Raspberry Pi 3
With these items in hand, I was ready to transform my speakers into a powerful Bluetooth system.
If you’re planning to try this project or follow along, you likely already have some of these components at home, making it a cost-effective way to repurpose old equipment.
Connecting the DAC with the Speakers
The DAC I ordered didn’t come with convenient connectors, so I had to get my hands dirty—literally.
I rummaged through my dad’s toolbox and found an old soldering iron, which I hadn’t used before. After watching a couple of quick tutorials online, I felt brave enough to give it a shot.
Soldering the speaker wires to the DAC wasn’t as straightforward as I had imagined. But after a few tries, and a lot of patience, I managed to secure the wires in place.
Here you can see my exceptional soldering skills
Before closing the speaker lids, I decided to test the connection directly. I powered up the DAC, connected it to the speakers, and played some music through a temporary audio input.
To my relief, sound filled the room. It wasn’t perfect yet, but it was enough to confirm that my soldering job worked.
With the DAC connected, I was ready to move on to the next part of the build!
Adding Bluetooth functionality with Raspberry Pi
There are countless guides and projects for turning a Raspberry Pi into a Bluetooth receiver, but I stumbled upon a GitHub project that stood out for its simplicity. It is called Raspberry Pi Audio Receiver.
The project had a script that automated the entire setup process, including installing all necessary dependencies. Here’s how I did it:
Download the Installation Script
First, I downloaded the script directly from the GitHub repository:
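The article doesn't reproduce the exact command, but fetching and running an install script from a GitHub project typically looks like the sketch below. The repository path is a placeholder, not the project's actual URL, so check the Raspberry Pi Audio Receiver repository's README for the real instructions before running anything:

```bash
# Hypothetical example: download and run an installer script.
# Replace <user> with the actual GitHub account hosting the project.
wget https://raw.githubusercontent.com/<user>/rpi-audio-receiver/main/install.sh
bash install.sh
```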
For first-timers or DIY enthusiasts new to this, the installation screen might seem a bit overwhelming. You’ll be prompted several times to install various components and make decisions about the setup.
Don’t worry, I’ll break down what’s happening so you can follow along with confidence.
Hostname: The script lets you set up the hostname (the internal name for your Raspberry Pi) and a visible device name (referred to as the "pretty hostname").
This visible name is what other devices will see when connecting via Bluetooth, AirPlay, or Spotify Connect. For example, you could name it something like DIY-Speakers.
Bluetooth Configuration: The script installs Bluetooth-related packages and sets up an agent to accept all incoming connections.
The Pi is configured to play audio via ALSA (Advanced Linux Sound Architecture), and a smart script disables Bluetooth discoverability whenever the Pi is connected to a device.
AirPlay 2 Setup: This feature installs Shairport Sync, allowing the Raspberry Pi to act as an AirPlay 2 receiver. It’s perfect for Apple users who want to stream music directly from their devices.
Spotify Connect: Finally, the script installs Raspotify, an open-source Spotify client for Raspberry Pi. This enables the Raspberry Pi to act as a Spotify Connect device, letting you stream music straight from the Spotify app on your phone or computer.
Each step is straightforward, but you’ll need to be present during the installation to approve certain steps and provide input.
This process takes about 5 minutes to complete, but once done, your Raspberry Pi transforms into a multi-functional audio receiver, supporting Bluetooth, AirPlay 2, and Spotify Connect.
Testing the DIY Bluetooth speakers
With the hardware setup complete and the Raspberry Pi configured as a Bluetooth audio receiver, it was time for the moment of truth - testing the DIY speakers.
The goal was to see how well this entire setup performed and whether all the effort I put in was worth it.
To test the system, I decided to connect the speakers to my smartphone via Bluetooth.
Sorry for the image quality, had to use an old phone to capture this image
After pairing, I opened my music app and selected a random song to play. The sound flowed seamlessly through the speakers.
I’ll admit, hearing music come out of the old hi-fi speakers felt incredibly rewarding. It was proof that all the soldering, scripting, and configuring had paid off.
How did it perform?
Audio Quality: The sound quality was surprisingly good for a DIY setup. The DAC delivered clear audio with no noise, and the hi-fi speakers held up well despite being unused for a long time.
Bluetooth Range: The range was decent even though my Pi sits in a plastic enclosure; I could move around my room and still maintain a stable connection.
Responsiveness: There was no noticeable delay or lag in audio playback, whether I streamed music or used Spotify Connect.
Final thoughts
This project was a blend of frustration, curiosity, and pure DIY joy. What started as an attempt to salvage some old, forgotten hi-fi speakers turned into a rewarding learning experience.
From figuring out how to solder for the first time (and not doing a great job) to repurposing my old Raspberry Pi 3 as a Bluetooth receiver, every step had its challenges but that’s what made it so satisfying.
The best part? Hearing music blast through those old speakers again, knowing I brought them back to life with a bit of effort and creativity.
It’s proof that you don’t always need to spend a fortune to enjoy modern tech; sometimes, all it takes is what you already have lying around and a willingness to tinker.
If you’ve got old speakers collecting dust, I highly recommend giving this a shot. It’s not just about the outcome; the journey itself is worth it.
💬 And if you did something like this in your home setup, please share it in the comments. I and other readers may get some interesting ideas for the next weekend projects.
And now it seems that Ubuntu is relying heavily on Rust re-implementations. In the upcoming Ubuntu 25.10, you'll see GNU Coreutils replaced with Rust-based uutils. The classic sudo command will also be replaced by Rust-based sudo-rs.
And other Linux news, tips, and, of course, memes!
This edition of FOSS Weekly is supported by AWS Valkey.
❇️ Scale Your Real-Time Apps with Amazon ElastiCache Serverless for Valkey
What’s Valkey? Valkey is the most permissive open source alternative to Redis stewarded by the Linux Foundation, which means it will always be open source.
What’s Amazon ElastiCache Serverless for Valkey? It’s a serverless, fully managed caching service delivering microsecond latency performance at 33% lower cost than other supported engines.
Even better, you can upgrade from ElastiCache for Redis OSS to ElastiCache for Valkey with zero downtime.
Don’t just take our word for it – customers are already seeing improvements in speed, responsiveness, and cost.
I have always considered Kazam to be the best screen recorder for Linux. For the past several years, it didn't see any development. But finally, there is Kazam 2.0 with new features.
🎟️ Free Webinar | How SOC Teams Save Time with ANY.RUN: Action Plan
Trusted by 15,000+ organizations, ANY.RUN knows how to solve SOC challenges. Join team leads, managers, and security pros to learn expert methods on how to:
With GNOME Tweaks, you can change the app window focus mode from "Click to Focus" to "Focus on Hover". To do that, open GNOME Tweaks and go into the Windows tab. Here, under Window Focus, click on "Focus on Hover". Now, enable the "Raise Windows When Focused" toggle button.
With this, whenever you hover over another window, it will be automatically focused. The window won't lose focus when the cursor is on the desktop. To revert to stock behavior, click on the "Click to Focus" option.
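If you prefer the command line, the same switch can likely be made with gsettings. The key names below come from GNOME's window-manager schema; verify them on your version with `gsettings list-keys org.gnome.desktop.wm.preferences` before relying on this:

```bash
# Focus follows mouse ("sloppy" focus), and raise the focused window.
gsettings set org.gnome.desktop.wm.preferences focus-mode 'sloppy'
gsettings set org.gnome.desktop.wm.preferences auto-raise true

# Revert to the stock click-to-focus behaviour:
gsettings set org.gnome.desktop.wm.preferences focus-mode 'click'
```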
🤣 Meme of the Week
The list never ends! 🥲
🗓️ Tech Trivia
After Commodore declared bankruptcy in 1994, German company Escom AG bought its name and tech for $10 million, aiming to revive the iconic Amiga, but eventually sold the rights instead.
Word wrap automatically breaks a long line of text so it fits within your current editor window, without you needing to scroll horizontally. It doesn’t add line breaks to your file; it just wraps it visually.
Picture this: You’re writing a long JavaScript function or a long SQL query. Without word wrap, you’d be endlessly dragging that horizontal scrollbar. With it, everything folds neatly within view.
This is especially useful when:
You're working on a small screen.
You want cleaner screenshots of your code.
You prefer not to lose track of long lines.
Now, let's see how to turn it on or off when needed.
Method 1: The quickest toggle - Alt + Z
Yep, there’s a shortcut for it!
Open any file in VS Code.
Press Alt + Z on your keyboard.
And that’s it! Word wrap is toggled. Hit it again to switch it off.
Method 2: Use the command palette
Prefer something a bit more visual? The Command Palette is your go-to.
Press Ctrl + Shift + P (or Cmd + Shift + P on macOS).
Type Toggle Word Wrap.
Click the option when it appears.
This is ideal if you’re not sure of the shortcut or just want to double-check before toggling.
Method 3: Set a default from settings
Want word wrap always on (or always off) when you open VS Code? You can change the default behavior.
1. Go to File > Preferences > Settings
2. Search for “word wrap.”
3. Under Editor: Word Wrap, choose from the following options:
off: Never wrap.
on: Always wrap.
wordWrapColumn: Wrap at a specific column number.
bounded: Wrap at viewport or column, whichever is smaller.
💡
What’s “wordWrapColumn” anyway? It lets you define a column (like 20) at which VS Code should wrap lines. Great for keeping things tidy in teams with coding standards.
You can also tweak "editor.wordWrap" in settings.json if you prefer working directly with config files.
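In settings.json form, the bounded setup might look like this (the column value of 100 is just an example):

```json
{
  "editor.wordWrap": "bounded",
  "editor.wordWrapColumn": 100
}
```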
Wrapping up!
Word wrap might seem like a tiny detail, but it’s one of those “small things” that can make coding a lot more pleasant. Take the indentation settings for example, another crucial piece for code readability and collaboration. Yes, the tabs vs spaces debate lives on 😄
We’ll continue exploring more quick yet powerful tips to help you make the most of VS Code.
Until then, go ahead and wrap those words your way.
I have got my hands on this 10-inch touchscreen from SunFounder that is made for Raspberry Pi-like devices.
If you are considering adding touch capability to your Raspberry Pi project, this could be a good contender for that.
I have used a few SunFounder products in the past, but the Pironman case made me their fan. And I truly mean that. This is why, before I even opened the package, I had a feeling that this would be a solid device.
It is a well-thought-out device that gives a smooth touch experience. A single power cord runs both the screen and the Pi. The on-board speakers give you more than just a display, although they are very basic.
All the interfaces remain available. The best thing is that it can be used with several other SBCs too.
From 3D printing to cyberdeck to home automation, how you use it is up to you.
The $149 price tag is decent for the quality of the touchscreen and the out of box experience it provides for the Raspberry Pi OS.
Technical specifications
Before we get into the nitty-gritty of performance, let's look at what you're actually getting with this display:
| Specification | Details |
|---|---|
| Screen Size | 10 inches (diagonal) |
| Resolution | 1280 x 800 pixels |
| Panel Type | IPS (In-Plane Switching) |
| Touch Technology | Capacitive multi-touch (up to 10 points) |
| Connection | HDMI for display, USB for touch function |
| Compatible with | Raspberry Pi 4B, 3B+, 3B, 2B, Zero W |
| Power Supply | DC 12V/5A power supply with built-in USB-C PD |
| Audio | 2 speakers |
| Dimensions | 236mm x 167mm x 20mm |
| Viewing Angle | 178° (horizontal and vertical) |
| Weight | Approximately 350g |
Assembling
SunFounder has a thing for assembly. Like most of their other products, the touchscreen also needs some putting together. After all, it is properly called 'a 10-inch DIY touchscreen', so there is obviously a DIY angle here.
The assembly should not take you more than 10–15 minutes.
It basically requires attaching the single-board computer with the screws, taping on the speakers, and connecting the touchscreen cable.
It's actually fun to do the assembly. Not everyone will be a fan of this but I am guessing if you are into maker's electronics, you won't be unhappy with the assembly requirement.
Experiencing SunFounder DIY Touchscreen
The device is powered by a 12V/5A DC supply that also powers the Raspberry Pi with 5.1V/5A. There are LED lights at the back that indicate whether the Pi is turned on or not.
There is no on-board battery, in case you were wondering about that. It needs to be connected to the power supply all the time to function. Although, if you need, you can always attach a battery-powered system to it.
The display is IPS and the surface feels quite premium. Some people may find it a bit glossy and slippery but the IPS screens have the same look and feel in my experience.
Colors are vibrant, text is crisp, and the IPS panel means viewing angles are excellent.
The 10 point capacitive touch works out of the box. The touch response is quite good. I noticed that the double-click mouse action actually needs 3 quick taps. It took me some time to understand that it is the intended behavior.
My 4-year-old daughter used it for playing a few games on GCompris, and that worked very well. Actually, she sees the Raspberry Pi wallpaper and thinks it's her computer. I had to take the device off her hands as I didn't want her to use it as a tablet. I would prefer that she keeps on using a keyboard and mouse with her Pi.
On-screen keyboard
SunFounder claims that no drivers are required and the touchscreen is ready to plug and play if you use Raspbian OS.
The official SunFounder document mentions that this package should be preinstalled in Raspbian OS but that was not the case for me. Not a major issue as the on-screen keyboard worked fine too after installing the missing package.
Before I forget, I should mention that the touchscreen also has two tiny speakers at the bottom. They are good enough for occasional cases where you need audio output. You won't need to plug in headphones or external speakers in such cases.
But if you want anything more than that, you'll need to attach proper speakers. It really depends on what you need it for.
Dude, where is my stand?
It would have been nice to have some sort of stand with the screen. That would make it easier to use the touchscreen as a monitor on the table.
At first glance, it seems like it is more suitable as a wall mount to display your homelab dashboard or some other information.
But it's not completely impossible to use it on the desk without a dedicated stand. I used the extra M2.5 screws to increase the length of the bottom two screws, which gave it a stand-like appearance.
Little tweak to make a stand with extra screws
I thought I was being smart by utilizing those extra screws as a stand. Later, I found out that they were intended for that purpose, as the official documentation also mentions this trick.
I remember the older model of this touch screen used to have a dedicated stand.
Older model of SunFounder's Touchscreen had a dedicated stand
I still think that dedicated stand attachments would have been a better idea.
The answer always depends on what you need and what you want.
If you are on the lookout for a new touchscreen for your homelab or DIY projects, this is definitely worth a look.
Sure, the price tag is higher than that of the official Raspberry Pi touchscreen, but SunFounder's touchscreen has better quality (IPS), is bigger with a higher resolution, has speakers, and supports more SBCs.
Basically, it is a premium device, whereas most touchscreens available at lower prices have a very toy-ish feel.
If affordability is not a concern and you need an excellent touch experience for your projects, I can surely recommend this product.
When it comes to Logseq, we have a very cool plugin, Markdown Table Editor, that does the job neatly.
You can install this extension from the Logseq plugin Marketplace.
To create a table, press the / key. This brings up a small popup search box. Enter table here and select Markdown Table Editor.
This will create a popup window with a straight-forward interface to edit table entries. The interface is self-explanatory where you can add/delete columns, rows, etc.
Creating Markdown table in Logseq using the Markdown Table Editor plugin.
Logseq follows a bullet blocks approach, where each data block is a properly indented bullet point.
Now, the point to note here is "Properly indented".
You should be careful about the organization of parent, child, and grandchild nodes (bullets) in Logseq. Otherwise, when you reference a particular block of a note in the future, not all related data will be retrieved; some points may appear as part of another nested block, which defeats the whole purpose of linking.
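For illustration, here is roughly what a properly indented block tree looks like in the underlying Markdown file (the note content here is made up):

```markdown
- Project ideas
    - Self-hosted photo backup
        - Needs at least 2 TB of storage
    - Home automation dashboard
```

When you reference the "Self-hosted photo backup" block elsewhere, its child bullet stays attached to it, but its sibling "Home automation dashboard" does not. That is why the nesting matters.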
The Bullet Threading extension helps you keep track of your current editing position in the larger nested data tree by visually indicating the bullet path. This makes your current indent location clear at a glance.
Example of Bullet Threading Extension
Never again lose track of your data organization for lack of awareness of the indentation tree.
Tags is the best plugin for organizing data in Logseq, where there is only a very narrow difference between pages and tags; it is the context of usage that differentiates them.
So, assigning single-word or short-phrase tags to your notes will help you access and connect your knowledge in the future.
The Tags extension will query the notes and list all the tags in your data collection; be it a #note, #[[note sample]], or tags:: Newtag tag.
You can arrange them alphabetically or according to the number of notes tagged with that specific tag.
🚧
As of February 1, 2025, the GitHub repository of this project was archived by the creator. Keep an eye on further development for hassle-free usage.
Tags Plugin listing available tags
You can install the plugin from the Logseq plugins Marketplace.
You can neatly organize the document tree, scribble things down, and tag them properly. Each day in the Journal is an independent Markdown file in the Journals directory, which you can see in your file manager.
Journal Markdown Files
But it may feel a bit crowded over time, and getting a note from a particular date often involves searching and scrolling through the results.
The Journals Calendar plugin is a great help in this scenario. It adds a small calendar button to the top bar of Logseq. You can click on it and select a date from the calendar; if there is no Journal for that date, it will create one for you.
Journal Calendar Plugin in Logseq
Pages with Journals will be marked with a dot, allowing you to distinguish them easily.
Todo Master is a simple plugin that puts a neat progress bar next to a task, which you can use for visual progress tracking.
You can press the slash command (/) and select TODO Master from there to add the progress bar to the task of your choice. Watch the video to understand it better.
Since Logseq follows a different approach for data management compared to popular tools like Obsidian, there is no built-in table of contents for a page.
There is a "Contents" page in Logseq, but it has an entirely different purpose. In this case, a real table of contents renderer plugin is a great relief.
Logseq plugin Marketplace has numerous plugins and themes available to choose from.
But you should be careful, since third-party plugins can sometimes result in data loss. Weird, I know.
It is always good to keep a proper backup of your data, especially if you follow a local-first note management policy. You wouldn't want to lose your notes, would you?
💬 Which Logseq plugin do you use the most? Feel free to suggest your recommendations in the comment section, so that other users may find them useful!
Before the age of blogs, forums, and YouTube tutorials, Linux users relied on printed magazines to stay informed and inspired. Titles like Linux Journal, Linux Format, and Maximum Linux were lifelines for enthusiasts, packed with tutorials, distro reviews, and CD/DVDs.
These glossy monthly issues weren’t just publications—they were portals into a growing open-source world.
Let's recollect memories of your favorite Linux magazines. Did you ever read them or have a subscription?
And other Linux news, tips, and, of course, memes!
This edition of FOSS Weekly is supported by PikaPods.
❇️ PikaPods: Enjoy Self-hosting Hassle-free
PikaPods allows you to quickly deploy your favorite open source software. All future updates are handled automatically by PikaPods while you enjoy using the software. PikaPods also shares revenue with the original developers of the software.
Desktop Linux is mostly neglected by the industry but loved by the community. For the past 12 years, It's FOSS has been helping people use Linux on their personal computers. And we are now facing the existential threat from AI models stealing our content.
If you like what we do and would love to support our work, please become an It's FOSS Plus member. It costs $24 a year (less than the cost of a burger meal each month), and you get an ad-free reading experience with the satisfaction of helping the desktop Linux community.
This e-book bundle is tailored for DevOps professionals and rookies alike—learn from a diverse library of hot courses like Terraform Cookbook, Continuous Deployment, Policy as Code and more.
In Brave Browser, you can open two tabs in a split view. First, select two tabs by Ctrl + Left-Click. Now, Right-Click on any tab and select "Open in split view". The two tabs will then be opened in a split view.
You can click on the three-dot button in the middle of the split to swap the position of tabs, unsplit tabs, and resize them.
🤣 Meme of the Week
We really need to value them more 🥹
🗓️ Tech Trivia
On April 27, 1995, the U.S. Justice Department sued to block Microsoft’s $2.1 billion acquisition of Intuit, arguing it would hurt competition in personal finance software. Microsoft withdrew from the deal shortly after.
🧑🤝🧑 FOSSverse Corner
Know of a way to rename many files on Linux in one go? Pro FOSSer Neville is looking for ways:
Mozilla's Firefox needs no introduction. It is one of the few web browsers around that is not based on Chromium, setting out to provide a privacy-focused browsing experience for its users.
Sadly, some recent maneuvers have landed it in hot water, the most recent of which was a policy change that resulted in an intense backlash from the open source community, who felt wronged.
The consensus was that Mozilla had broken its promise of not selling user data, leading to widespread concern over the organization's commitment to user privacy.
Since then, they have tweaked Firefox's Terms of Use to better reflect how they handle user data, clarifying that they do not claim ownership over user content and that any data collected is used for maintaining and improving Firefox, in line with their Privacy Policy.
Behind the scenes, Mozilla has also been focusing on developing more AI-powered features for Firefox—an approach that has drawn mixed reactions, with many asking for improvements to the core, everyday browser functionality.
Luckily, they have finally delivered something on that front by implementing the long-requested Tab Groups feature.
Firefox Tab Groups: Why Should You Use It?
As the name implies, Tab Groups allows users to organize multiple open tabs into customizable, color-coded, and collapsible sections—making it significantly easier for users to reduce visual clutter, stay focused on priority tasks, and streamline workflows.
This can greatly boost productivity, especially when paired with the right tools and tips for optimizing your workflow on a Linux desktop. Being someone who has to go through a lot of material when researching topics, I fully understand the importance of efficient tab management on a web browser.
Using a tab grouping feature like this helps minimize distractions, keeps your browser organized, and ensures quick access to important information without you getting overwhelmed by an endless stack of tabs.
You can learn more about how this came to be on the announcement blog.
How to Group Tabs in Firefox?
If you are looking to integrate this neat feature into your workflow, then you have to first ensure that you are on Firefox 138 or later. After that, things are quite straightforward.
Open up a bunch of new tabs and drag/drop one onto the other. This should open up the "Create tab group" dialog. Here, enter the name for the tab group, give it a color, and then click on "Done".
You can right-click on existing tabs to quickly add them to tab groups, or remove them for easy reorganization into new groups.
Tab groups can be expanded or collapsed with a simple left-click, and you can drag them to rearrange as needed. If you accidentally close Firefox, or even do so intentionally, you can still access your previous tab groups by clicking the downward arrow button above the address bar.
Similarly, managing an existing tab group is easy—just right-click on the group to open the "Manage tab group" dialog. From there, you can rename the group, change its color, move it around, or delete it entirely.
Besides that, Mozilla has mentioned that they are already experimenting with AI-powered tools for organizing tabs by topic, which runs on their on-device AI implementation. It is live on the Firefox Nightly build and can be accessed from the "Suggest more of my tabs" button.
Logseq is different from the conventional note-taking applications in many aspects.
Firstly, it follows a note block approach, rather than a page-first approach for content organization. This allows Logseq to achieve data interlinking at the sentence level. That is, you can refer to any sentence of a note in any other note inside your database.
Another equally important feature is the “Special Pages”: the “Journals” and “Contents” pages. Both of these special pages have use cases far beyond what their names indicate.
The Journals page
The “Journals” page is the first thing you will see when you open Logseq. Here, you can see dates as headings. The Logseq documentation suggests that new users, before understanding Logseq better, should use this Journals page heavily for taking notes.
Journals Page
As the name suggests, this is the daily journals page. Whatever you write under a date is saved as a separate Markdown file with the date as the title. You can see these files in your file manager, too: head to the location you use for Logseq, then open the journals directory.
Journals Markdown Files in File Manager
Let's see how to make this Journals page most useful.
Journal page as a daily diary
Let's start with the basics. The “Journals” page can be used as your daily diary page.
If you are a frequent diary writer, Logseq is the best tool to digitize your life experiences and daily thoughts.
Each day, a new page will be created for you.
If you need a page for a day in the past, just click on the Create button at the bottom of the Logseq window and select “New page”.
Click on Create → New Page
In the dialog, enter the date for the required journal in the format Mar 20th, 2023 and press Enter. This will create the Journal page for the specified date!
Create Journal page for an old date
Journal as a note organizer
If you have read the Logseq Pages and Links article in this series, you should recall that Logseq treats the concepts of pages, tags, etc. in an almost identical manner. If you want to create a new note, the best way is to use the keyboard method:
#[[Note Title Goes Here]]
The above creates a page for you. Now, the best place to create a new page is the Journals page.
Logseq has a powerful backlink feature. If you use the Journals page to create a new page, you don't need to add any date references inside the page separately, since at the very end of the page you will have a backlink to that day's journal.
Note with date reference
This is beneficial because you can easily recall when a note was first created.
Journal as a to-do organizer
Logseq can be used as a powerful task manager application as well, and the Journals page plays a crucial role in it.
If you come across any task while you are in the middle of something, just open the Journals page in Logseq and press the / key.
Search and enter TODO. Then type the task you are about to do.
Once done, press / again and search for Date Picker. Select a date from the calendar.
Creating a TODO task in Logseq
That's it. You have created a to-do item with a due date. Now, when the date arrives, you will get a link on that day's Journal page. Thus, when you open Logseq on that day, you will see this item.
It will also contain the link to the journal page from where you added the task.
Other than that, you can search for the TODO page and open it to see all your tasks marked with TODO.
Search for the TODO page to list all the to-do tasks
Journal to manage tasks
Task management is not just adding due dates to your tasks. You should be able to track a project and know what stage a particular task is at. For this, Logseq has some built-in tags/pages, for example, LATER, DOING, and DONE.
These tags can be accessed by pressing the / key and searching for the name.
For example, if you have some ideas that should be done at a later date, but you are not sure exactly when, add them with the LATER tag, just like the TODO tag explained above.
Now, you can search for the LATER tag to see which tasks have been added to that list.
Using the LATER tag in Logseq
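If you want these task lists inside a note rather than on the tag's page, Logseq also supports inline queries. For example, putting the following in any block lists all LATER and DOING tasks (this uses Logseq's simple query syntax; the results depend on your graph):

```markdown
{{query (task LATER DOING)}}
```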
Using the Journal page is beneficial here because you will be able to recollect on what date a particular task was added, giving you more insight into that task. This helps even more if you have also entered your thoughts for that day in the Journal.
The Contents Page
Logseq has a special Contents page type, but don't confuse it with the usual table of contents. That is not its purpose. Here, I will mention the way I use the contents page. You can create your own workflows once you know its potential.
You can think of the Contents page as a manually created Dashboard to your notes and database. Or, a simple home page from where you can access contents needed frequently.
The most interesting thing that sets the contents page apart from others is the fact that it will always be visible in the right sidebar. Therefore, if you enable the sidebar permanently, you can see the quick links in the contents all the time.
Edit the Contents page
As said above, the Contents page is available on the right sidebar. So click on the sidebar button in the top panel and select Contents. You can edit this page from this sidebar view, which is the most convenient way.
Click on the Sidebar button and select Contents
All the text formatting, linking, etc., that work on Logseq pages works on this page as well.
1. Add all important pages/tags
The first thing you can do is to add frequently accessed pages or tags.
For example, let's say you will be accessing the Kernel, Ubuntu, and APT tags frequently. So, what you can do is to add a Markdown heading:
## List of Tags
Now, link the tags right in there, one per line:
#Kernel
#Ubuntu
#APT
For better arrangement, you can use the Markdown horizontal rule after each section.
---
2. Link the task management pages
As discussed in the Journals section, you can have a variety of task related tags like TODO, LATER, WAITING, etc. So you can link each of these in the contents page:
## List of Tasks
#TODO
#LATER
#WAITING
---
🚧
Note the difference between a Markdown heading and a Logseq tag: don't forget to add a space after the # if you are creating a Markdown heading.
3. Quick access links
If you visit some websites daily, you can bookmark them on the Contents page for quick access.
After all this, your contents page will look like this:
Contents page in Logseq
Wrapping Up
As you can see, you can utilize these pages in non-conventional ways to get a more extensive experience from Logseq. That's the beauty of this open-source tool. The more you explore, the more you discover, the more you enjoy.
In the next part of this series, I'll share my favorite Logseq extensions.
There is something about CachyOS. It feels fast. The performance is exceptionally smooth, especially if you have newer hardware.
I don't have data to prove it but my new Asus Zenbook that I bought in November last year is rocking CachyOS superbly.
The new laptop came with Windows, which is not surprising. I didn't replace Windows with Linux. Instead, I installed CachyOS in dual boot mode alongside Windows.
The thing is that it was straightforward to do so. Anything simple in the Arch domain is amusing in itself.
I understand that video may not be everyone's favorite format, so I created this tutorial in text form too.
There are a few things to note here:
An active internet connection is mandatory. Offline installation is not possible.
An 8 GB USB drive is needed to create the installation medium.
At least 40 GB of free disk space (20 GB could work too, but that would be cutting it very close).
Time and patience are of the essence.
🚧
You should back up your important data to an external disk or the cloud. It is rare for anything to go wrong, but if you are not familiar with handling disk partitions, a backup will save your day.
SPONSORED
Use Swiss-based pCloud storage
Back up important folders from your computer to pCloud, securely. Keep and recover old versions for up to 1 year.
You can create the live USB on any computer with the help of Ventoy. I used my TUXEDO notebook for this purpose.
Download Ventoy from the official website. When you extract it, you'll find a few executables to run it either in a browser or as a GUI. Use whichever you want.
Make sure the USB drive is plugged in, then install Ventoy on it.
Once done, all you need to do is drag the CachyOS ISO onto the Ventoy disk. The example below shows it for Mint, but it's the same for any Linux ISO.
Once I had the CachyOS live USB, I put it in the Asus Zenbook and restarted it. While the computer was starting up, pressing the F2/F10 key took me to the BIOS settings.
I did that to ensure that the system boots from the USB instead of the hard disk by changing the boot order.
Change boot priority
When the system booted next, the Ventoy screen appeared and I could see the option to load the CachyOS live session.
Select CachyOS
I selected to boot in normal mode.
Normal Mode
There was an option to boot into CachyOS with NVIDIA. I went with the default option.
Open-source or closed-source drivers
While booting into CachyOS, I ran into an issue. There was a "Start Job is running..." message for more than a minute or two. I force restarted the system and the live USB worked fine the next time.
Start job duration notification
If this error persists for you, try changing the USB port or recreating the live USB.
Another issue I discovered by trial and error was related to the password. CachyOS showed a login screen that seemed to be asking for a username and password. As per the official docs, no password is required in the live session.
I changed the display server to Wayland, clicked the Next button, and was logged into the system without any password.
Select Wayland
Installing CachyOS
Again, active internet is mandatory to download the desktop environment and other packages.
Select the "Launch installer" option.
Click on "Launch Installer"
My system was not plugged into a power source but it had almost 98% battery and I knew that it could handle the quick installation easily.
System not connected to power source warning
The settings at the beginning are quite straightforward, like selecting the time zone
Set Location
and keyboard layout.
Set keyboard layout
The most important step is the disk partitioning, and I was pleasantly surprised to see that the Calamares installer detected the Windows presence and gave the option to install CachyOS alongside it.
I have a single disk with a Windows partition as well as an EFI system partition.
All I had to do was to drag the slider and shrink the storage appropriately.
Storage settings
I gave more space to Linux because it was going to be my main operating system.
The next screen gave the options to install a desktop environment or window manager. I opted for GNOME. You can see why it is important to have an active internet connection: the desktop environment is not on the ISO file, so it needs to be downloaded first.
Select Desktop Environment
And a few additional packages are added to the list automatically.
Installing additional packages
And as the last interactive step of the installation, I created the user account.
Enter user credentials
A quick overview of what is going to be done at this point. Things looked fine so I hit the Install button.
Click on Install
And then just wait for a few minutes for the installation to complete.
Installation progress
When the installation completes, restart the system and take out the live USB. In my case, I forgot to take the USB out, but it still booted from the hard disk.
Fixing the missing Windows entry in GRUB
When the system booted next, I could see the usual GRUB bootloader screen, but there was no Windows option in it.
Windows Boot Manager is absent
Fixing it was simple. I opened the grub config file for editing in Nano.
sudo nano /etc/default/grub
The OS prober was disabled, so I uncommented that line, saved the file, and exited.
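For reference, this is the relevant line in /etc/default/grub once uncommented (a config fragment; the rest of the file stays unchanged):

```shell
# In /etc/default/grub, remove the leading '#' from this line so that
# os-prober runs and detects other installed operating systems:
GRUB_DISABLE_OS_PROBER=false
```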
Uncomment OS Prober
The next step was to update grub to make it aware of the config changes.
sudo grub-mkconfig -o /boot/grub/grub.cfg
And on the next reboot, the Windows Boot Manager option was there to let me use Windows.
Windows Boot Manager in the boot screen
This is what I did to install CachyOS Linux alongside Windows. For an Arch-based distro, the procedure was pretty standard, and that's a good thing. Installing Linux should not be super complicated.
💬 If you tried dual booting CachyOS, do let me know how it went in the comment section.
If you have questions about using Linux or if you want to share something interesting you discovered with your Linux setup, you are more than welcome to utilize the Community.
And other Linux news, tips, and, of course, memes!
This edition of FOSS Weekly is supported by Valkey.
❇️ Valkey – The Drop-in Alternative to Redis OSS
With the change of Redis licensing in March of 2024 came the end of Redis as an open source project. Enter Valkey – the community driven fork that preserves and improves the familiar high-performance, key-value datastore for improving application performance.
Stewarded by the Linux Foundation, Valkey serves as an open source drop-in alternative to Redis OSS – no code changes needed, with the same developer-friendly experience. For your open source database, check out Valkey.
Continuing the Logseq series, learn how to tag, link, and reference in Logseq the right way, and when you are done with that, you can try customizing it.
If you just installed or upgraded to the Ubuntu 25.04 release, here are 13 things you should do right away:
Alternatively, can you match the Linux distros with their logos?
💡 Quick Handy Tip
In GNOME File Manager (Nautilus), you can invert the selection of items using the keyboard shortcut CTRL + SHIFT + I.
🤣 Meme of the Week
Hah, this couldn't be more true. 😆
🗓️ Tech Trivia
On April 20, 1998, during a demonstration of a beta version of Windows 98 by Microsoft's Bill Gates at COMDEX, the system crashed during the live event. Gates jokingly said, "That must be why we're not shipping Windows 98 yet". If you ever used Windows 98, you know it should have never been shipped 😉
🧑🤝🧑 FOSSverse Corner
Can you help a newbie FOSSer with their search for a Linux distribution chart?
Logseq provides all the necessary elements you need for creating your knowledge base.
But one size doesn't fit all. You may need something extra that is either too complicated to achieve in Logseq or not possible at all.
What do you do, then? You use external plugins and extensions.
Thankfully, Logseq has a thriving marketplace where you can explore various plugins and extensions created by individuals who craved more from Logseq.
Let me show you how you can install themes and plugins.
🚧
Privacy alert! Do note that plugins can access your graph and local files. You'll see this warning in Logseq as well. A more granular permission control system is not available at the moment.
Installing a plugin in Logseq
Click on the top-bar menu button and select Plugins as shown in the screenshot below.
Menu → Plugins
In the Plugins window, click on Marketplace.
Click on Marketplace tab
This will open the Logseq Plugins Marketplace. You can click on the title of a plugin to get the details about that plugin, including a sample screenshot.
Click on Plugin Title
If you find the plugin useful, use the Install button adjacent to the Plugin in the Marketplace section.
Install a Plugin
Managing Plugins
To manage a plugin (enable/disable it, fine-tune it, etc.), go to Menu → Plugins. This takes you to the Manage Plugins interface.
📋
If you are on the Marketplace, just use the Installed tab to get all the installed plugins.
Installed plugins section
Here, you can enable/disable plugins in Logseq using the corresponding toggle button. Similarly, hover over the settings gear icon for a plugin and select the Open Settings option to access the plugin's configuration.
Click on Plugin settings gear icon
Installing themes in Logseq
Logseq looks good by default to me but you can surely experiment with its looks by installing new themes.
Similar to what you saw in the plugin installation section, click on the Plugins option from the Logseq menu button.
Click on Menu → Plugins
Why did I not click the Themes option above? Because that option is for switching themes, not installing them.
In the Plugins window, click on Marketplace section and select Themes.
Select Marketplace → Themes
Click on the title of a theme to get the details, including screenshots.
Logseq theme details page
To install a theme, use the Install button adjacent to the theme in Marketplace.
Click Install to install the theme
Enable/disable themes in Logseq
🚧
Changing themes is not done in this window. Theme switching will be discussed below.
All the installed themes will be listed in Menu → Plugins → Installed → Themes section.
Installed themes listed
From here, you can disable/enable themes using the toggle button.
Changing themes
Make sure all the desired installed themes are enabled because disabled themes won't be shown in the theme switcher.
Click on the main menu button and select the Themes option.
Click on Menu → Themes
This will bring a drop-down menu interface from where you can select a theme. This is shown in the short video below.
Updating plugins and themes
Occasionally, plugins and themes will provide updates.
To check for available plugin/theme updates, click on Menu → Plugins.
Here, select the Installed section to access installed Themes and Plugins. There should be a Check for Update button for each item.
Click on Check Update
Click on it to check if any updates are available for the selected plugin/theme.
Uninstall plugins and themes
By now, you know that Logseq considers both plugins and themes as plugins, so you can uninstall both in the same way.
First, click on Menu button and select the Plugins option.
Click on the Menu and select Plugins
Here, go to the Installed section. Now, if you want to remove an installed Plugin, go to the Plugins tab. Else, if you would like to remove an installed theme, go to the Themes tab.
Select Plugins or Themes Section
Hover over the settings gear of the item that needs to be removed and select the Uninstall button.
Uninstall a Plugin or Theme
When prompted for confirmation, click on Yes, and the plugin/theme will be removed.
Manage plugins from Logseq settings
Logseq settings provide a neat place for tweaking installed plugins and themes if they offer extra settings.
Click on the menu button on the top-bar and select the Settings button.
Click on Menu → Settings
In the settings window, click on Plugins section.
Click on Plugins Section in Settings
Here, you can get a list of plugins and themes that offer some tweaks.
Plugin settings in Logseq Settings window
And that's all you need to know about exploring plugins and themes in Logseq. In the next tutorial in this series, I'll discuss special pages like Journal. Stay tuned.
Large Language Models (LLMs) are powerful, but they have one major limitation: they rely solely on the knowledge they were trained on.
This means they lack real-time, domain-specific updates unless retrained, an expensive and impractical process. This is where Retrieval-Augmented Generation (RAG) comes in.
RAG allows an LLM to retrieve relevant external knowledge before generating a response, effectively giving it access to fresh, contextual, and specific information.
Imagine having an AI assistant that not only remembers general facts but can also refer to your PDFs, notes, or private data for more precise responses.
This article takes a deep dive into how RAG works, how LLMs are trained, and how we can use Ollama and Langchain to implement a local RAG system that fine-tunes an LLM’s responses by embedding and retrieving external knowledge dynamically.
By the end of this tutorial, we’ll build a PDF-based RAG project that allows users to upload documents and ask questions, with the model responding based on stored data.
✋
I’m not an AI expert. This article is a hands-on look at Retrieval Augmented Generation (RAG) with Ollama and Langchain, meant for learning and experimentation. There might be mistakes, and if you spot something off or have better insights, feel free to share. It’s nowhere near the scale of how enterprises handle RAG, where they use massive datasets, specialized databases, and high-performance GPUs.
What is Retrieval-Augmented Generation (RAG)?
RAG is an AI framework that improves LLM responses by integrating real-time information retrieval.
Instead of relying only on its training data, the LLM retrieves relevant documents from an external source (such as a vector database) before generating an answer.
How RAG works
Query Input – The user submits a question.
Document Retrieval – A search algorithm fetches relevant text chunks from a vector store.
Contextual Response Generation – The retrieved text is fed into the LLM, guiding it to produce a more accurate and relevant answer.
Final Output – The response, now grounded in the retrieved knowledge, is returned to the user.
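The four steps above can be sketched in plain Python. This is a deliberately tiny, hedged illustration: the "vector store" is just word-overlap scoring and the "LLM" is a string template, but the retrieve-then-generate flow is the same one a real pipeline follows.

```python
def retrieve(query, documents, k=1):
    # Toy stand-in for a vector search: score each document by
    # word overlap with the query and return the top k matches.
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(query, context):
    # Toy stand-in for the LLM call: a real system would send
    # this context to the model as part of the prompt.
    return f"Based on: {' | '.join(context)} -> answering: {query}"

docs = [
    "RAG retrieves external documents before generation.",
    "Fine-tuning retrains model weights on new data.",
]
context = retrieve("how does RAG use documents", docs)
print(generate("how does RAG use documents", context))
```

The point is the shape of the flow: retrieval narrows the context first, and generation only ever sees what retrieval handed it.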
Why use RAG instead of fine-tuning?
No retraining required – Traditional fine-tuning demands a lot of GPU power and labeled datasets. RAG eliminates this need by retrieving data dynamically.
Up-to-date knowledge – The model can refer to newly uploaded documents instead of relying on outdated training data.
More accurate and domain-specific answers – Ideal for legal, medical, or research-related tasks where accuracy is crucial.
How LLMs are trained (and why RAG improves them)
Before diving into RAG, let’s understand how LLMs are trained:
Pre-training – The model learns language patterns, facts, and reasoning from vast amounts of text (e.g., books, Wikipedia).
Fine-tuning – It is further trained on specialized datasets for specific use cases (e.g., medical research, coding assistance).
Inference – The trained model is deployed to answer user queries.
While fine-tuning is helpful, it has limitations:
It is computationally expensive.
It does not allow dynamic updates to knowledge.
It may introduce biases if trained on limited datasets.
With RAG, we bypass these issues by allowing real-time retrieval from external sources, making LLMs far more adaptable.
Building a local RAG application with Ollama and Langchain
In this tutorial, we'll build a simple RAG-powered document retrieval app using LangChain, ChromaDB, and Ollama.
The app lets users upload PDFs, embed them in a vector database, and query for relevant information.
💡
All the code is available in our GitHub repository. You can clone it and start testing right away.
Installing dependencies
To avoid messing up our system packages, we’ll first create a Python virtual environment. This keeps our dependencies isolated and prevents conflicts with system-wide Python packages.
Navigate to your project directory and create a virtual environment:
cd ~/RAG-Tutorial
python3 -m venv venv
Now, activate the virtual environment:
source venv/bin/activate
Once activated, your terminal prompt should change to indicate that you are now inside the virtual environment.
With the virtual environment activated, install the necessary Python packages using requirements.txt:
pip install -r requirements.txt
This will install all the required dependencies for our RAG pipeline, including Flask, LangChain, Ollama, and Pydantic.
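The exact contents of requirements.txt are in the GitHub repository; as a rough idea, it will contain entries along these lines (the package names here are assumptions based on the imports used later in this tutorial, not pinned versions from the repo):

```
flask
python-dotenv
langchain
langchain-community
langchain-text-splitters
chromadb
unstructured[pdf]
pydantic
```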
Once installed, you’re all set to proceed with the next steps!
Project structure
Our project is structured as follows:
RAG-Tutorial/
│── app.py # Main Flask server
│── embed.py # Handles document embedding
│── query.py # Handles querying the vector database
│── get_vector_db.py # Manages ChromaDB instance
│── .env # Stores environment variables
│── requirements.txt # List of dependencies
└── _temp/ # Temporary storage for uploaded files
Step 1: Creating app.py (Flask API Server)
This script sets up a Flask server with two endpoints:
/embed – Uploads a PDF and stores its embeddings in ChromaDB.
/query – Accepts a user query and retrieves relevant text chunks from ChromaDB.
route_embed(): Saves an uploaded file and embeds its contents in ChromaDB.
route_query(): Accepts a query and retrieves relevant document chunks.
import os
from dotenv import load_dotenv
from flask import Flask, request, jsonify
from embed import embed
from query import query
from get_vector_db import get_vector_db

load_dotenv()

TEMP_FOLDER = os.getenv('TEMP_FOLDER', './_temp')
os.makedirs(TEMP_FOLDER, exist_ok=True)

app = Flask(__name__)

@app.route('/embed', methods=['POST'])
def route_embed():
    if 'file' not in request.files:
        return jsonify({"error": "No file part"}), 400
    file = request.files['file']
    if file.filename == '':
        return jsonify({"error": "No selected file"}), 400
    embedded = embed(file)
    if embedded:
        return jsonify({"message": "File embedded successfully"}), 200
    return jsonify({"error": "Embedding failed"}), 400

@app.route('/query', methods=['POST'])
def route_query():
    data = request.get_json()
    response = query(data.get('query'))
    if response:
        return jsonify({"message": response}), 200
    return jsonify({"error": "Query failed"}), 400

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=8080, debug=True)
Step 2: Creating embed.py (embedding documents)
This file handles document processing, extracts text, and stores vector embeddings in ChromaDB.
allowed_file(): Ensures only PDFs are processed.
save_file(): Saves the uploaded file temporarily.
load_and_split_data(): Uses UnstructuredPDFLoader and RecursiveCharacterTextSplitter to extract text and split it into manageable chunks.
embed(): Converts text chunks into vector embeddings and stores them in ChromaDB.
import os
from datetime import datetime
from werkzeug.utils import secure_filename
from langchain_community.document_loaders import UnstructuredPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from get_vector_db import get_vector_db

TEMP_FOLDER = os.getenv('TEMP_FOLDER', './_temp')

def allowed_file(filename):
    return filename.lower().endswith('.pdf')

def save_file(file):
    filename = f"{datetime.now().timestamp()}_{secure_filename(file.filename)}"
    file_path = os.path.join(TEMP_FOLDER, filename)
    file.save(file_path)
    return file_path

def load_and_split_data(file_path):
    loader = UnstructuredPDFLoader(file_path=file_path)
    data = loader.load()
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=7500, chunk_overlap=100)
    return text_splitter.split_documents(data)

def embed(file):
    if file and allowed_file(file.filename):
        file_path = save_file(file)
        chunks = load_and_split_data(file_path)
        db = get_vector_db()
        db.add_documents(chunks)
        db.persist()
        os.remove(file_path)
        return True
    return False
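To build some intuition for what chunk_size and chunk_overlap do, here is a simplified fixed-size splitter. RecursiveCharacterTextSplitter is smarter (it prefers to split on paragraph and sentence boundaries), but the sliding-window idea is the same: each chunk starts chunk_size - chunk_overlap characters after the previous one, so neighbouring chunks share context.

```python
def split_text(text, chunk_size=10, chunk_overlap=3):
    # Naive fixed-size splitter: consecutive chunks overlap by
    # chunk_overlap characters so context isn't cut mid-thought.
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

print(split_text("abcdefghijklmnop", chunk_size=10, chunk_overlap=3))
# -> ['abcdefghij', 'hijklmnop', 'op']
```

The overlap is why a detail that lands near a chunk boundary is repeated at the start of the next chunk, which improves the odds that retrieval returns it with enough surrounding context.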
Step 3: Creating query.py (Query processing)
It retrieves relevant information from ChromaDB and uses an LLM to generate responses.
get_prompt(): Creates a structured prompt for multi-query retrieval.
query(): Uses Ollama's LLM to rephrase the user query, retrieve relevant document chunks, and generate a response.
import os
from langchain_community.chat_models import ChatOllama
from langchain.prompts import ChatPromptTemplate, PromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain.retrievers.multi_query import MultiQueryRetriever
from get_vector_db import get_vector_db

LLM_MODEL = os.getenv('LLM_MODEL')
OLLAMA_HOST = os.getenv('OLLAMA_HOST', 'http://localhost:11434')

def get_prompt():
    QUERY_PROMPT = PromptTemplate(
        input_variables=["question"],
        template="""You are an AI assistant. Generate five reworded versions of the user question
to improve document retrieval. Original question: {question}""",
    )
    template = "Answer the question based ONLY on this context:\n{context}\nQuestion: {question}"
    prompt = ChatPromptTemplate.from_template(template)
    return QUERY_PROMPT, prompt

def query(input):
    if input:
        llm = ChatOllama(model=LLM_MODEL)
        db = get_vector_db()
        QUERY_PROMPT, prompt = get_prompt()
        retriever = MultiQueryRetriever.from_llm(db.as_retriever(), llm, prompt=QUERY_PROMPT)
        chain = ({"context": retriever, "question": RunnablePassthrough()} | prompt | llm | StrOutputParser())
        return chain.invoke(input)
    return None
Step 4: Creating get_vector_db.py (ChromaDB manager)
This helper sets up the vector database used by both embed.py and query.py. It is configured through a few environment variables:
CHROMA_PATH: Defines the storage location for ChromaDB.
COLLECTION_NAME: Sets the ChromaDB collection name.
LLM_MODEL: Specifies the LLM model used for querying.
TEXT_EMBEDDING_MODEL: Defines the embedding model for vector storage.
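The repository's get_vector_db.py isn't reproduced in full here, but based on the variables above, a minimal sketch might look like the following, assuming the Chroma vector store and OllamaEmbeddings classes from langchain_community (check the GitHub repo for the actual implementation):

```python
import os
from langchain_community.vectorstores import Chroma
from langchain_community.embeddings import OllamaEmbeddings

# Default values here ('chroma', 'local-rag', 'nomic-embed-text') are
# illustrative assumptions; the real values come from your .env file.
CHROMA_PATH = os.getenv('CHROMA_PATH', 'chroma')
COLLECTION_NAME = os.getenv('COLLECTION_NAME', 'local-rag')
TEXT_EMBEDDING_MODEL = os.getenv('TEXT_EMBEDDING_MODEL', 'nomic-embed-text')

def get_vector_db():
    # Embeddings are computed locally by Ollama
    embedding = OllamaEmbeddings(model=TEXT_EMBEDDING_MODEL)
    # A persistent Chroma collection shared by embed.py and query.py
    return Chroma(
        collection_name=COLLECTION_NAME,
        persist_directory=CHROMA_PATH,
        embedding_function=embedding,
    )
```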
I'm using lightweight LLMs for this tutorial, as I don't have a dedicated GPU to run inference on larger models. You can change the models in the .env file.
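For reference, a .env along these lines would wire everything together (the model names are examples, not recommendations; swap in whatever you have pulled with ollama pull):

```
TEMP_FOLDER=./_temp
CHROMA_PATH=chroma
COLLECTION_NAME=local-rag
LLM_MODEL=mistral
TEXT_EMBEDDING_MODEL=nomic-embed-text
```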
Testing the makeshift RAG + LLM Pipeline
Now that our RAG app is set up, we need to validate its effectiveness. The goal is to ensure that the system correctly:
Embeds documents – Converts text into vector embeddings and stores them in ChromaDB.
Retrieves relevant chunks – Fetches the most relevant text snippets from ChromaDB based on a query.
Generates meaningful responses – Uses Ollama to construct an intelligent response based on retrieved data.
This testing phase ensures that our makeshift RAG pipeline is functioning as expected and can be fine-tuned if necessary.
Running the Flask server
We first need to make sure our Flask app is running. Open a terminal, navigate to your project directory, and activate your virtual environment:
cd ~/RAG-Tutorial
source venv/bin/activate # On Linux/macOS
# or
venv\Scripts\activate # On Windows (if using venv)
Now, run the Flask app:
python3 app.py
If everything is set up correctly, the server should start and listen on http://localhost:8080.
Once the server is running, we'll use curl commands to interact with our pipeline and analyze the responses to confirm everything works as expected.
1. Testing Document Embedding
The first step is to upload a document and ensure its contents are successfully embedded into ChromaDB.
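Assuming the server is running on http://localhost:8080, an upload could look like this (document.pdf is a placeholder for your own file):

```
curl --request POST \
  --url http://localhost:8080/embed \
  --form file=@document.pdf
```

The --form flag sends the PDF as multipart/form-data, which is what the /embed route's request.files['file'] expects. If all goes well, the server responds with the "File embedded successfully" message.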
2. Testing query retrieval
Next, send a question to the /query endpoint. The flags we're using:
--header 'Content-Type: application/json' → Specifies that we are sending JSON data.
--data '{ "query": "Question about the PDF?" }' → Sends our search query to retrieve relevant information.
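Putting those flags together, a query request against the running server could look like this:

```
curl --request POST \
  --url http://localhost:8080/query \
  --header 'Content-Type: application/json' \
  --data '{ "query": "Whats in this file?" }'
```

The server answers with a JSON body whose message field contains the LLM's response.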
The expected response is a JSON object whose message field contains the LLM's answer, grounded in the uploaded document.
What's happening internally?
The query "Whats in this file?" is passed to ChromaDB to retrieve the most relevant chunks.
The retrieved chunks are passed to Ollama as context for generating a response.
Ollama formulates a meaningful reply based on the retrieved information.
If the response is not good enough:
Retrieved chunks are irrelevant – Possible cause: poor chunking strategy. Fix: adjust chunk sizes and retry embedding.
"llm_response": "I don't know" – Possible cause: context wasn't passed properly. Fix: check if ChromaDB is returning results.
Response lacks document details – Possible cause: the LLM needs better instructions. Fix: modify the system prompt.
3. Fine-tuning the LLM for better responses
If Ollama’s responses aren’t detailed enough, we need to refine how we provide context.
Tuning strategies:
Improve Chunking – Ensure text chunks are large enough to retain meaning but small enough for effective retrieval.
Enhance Retrieval – Increase n_results to fetch more relevant document chunks.
Modify the LLM Prompt – Add structured instructions for better responses.
Example system prompt for Ollama:
prompt = f"""
You are an AI assistant helping users retrieve information from documents.
Use the following document snippets to provide a helpful answer.
If the answer isn't in the retrieved text, say 'I don't know.'
Retrieved context:
{retrieved_chunks}
User's question:
{query_text}
"""
This ensures that Ollama:
Uses retrieved text properly.
Avoids hallucinations by sticking to available context.
Provides meaningful, structured answers.
Final thoughts
Building this makeshift RAG LLM tuning pipeline has been an insightful experience, but I want to be clear: I'm not an AI expert, and everything here is something I'm still learning myself.
There are bound to be mistakes, inefficiencies, and things that could be improved. If you’re someone who knows better or if I’ve missed any crucial points, please feel free to share your insights.
That said, this project gave me a small glimpse into how RAG works. At its core, RAG is about fetching the right context before asking an LLM to generate a response.
It’s what makes AI chatbots capable of retrieving information from vast datasets instead of just responding based on their training data.
Large companies use this technique at scale, processing massive amounts of data, fine-tuning their models, and optimizing their retrieval mechanisms to build AI assistants that feel intuitive and knowledgeable.
What we built here is nowhere near that level, but it was still fascinating to see how we can direct an LLM’s responses by controlling what information it retrieves.
Even with this basic setup, we saw how much impact retrieval quality, chunking strategies, and prompt design have on the final response.
This makes me wonder: have you ever thought about training your own LLM? Would you be interested in something like this, but fine-tuned specifically for Linux tutorials?
Imagine a custom-tuned LLM that could answer your Linux questions with accurate, RAG-powered responses. Would you use it? Let us know in the comments!