
Entries in this blog

by: Abhishek Kumar
Sun, 20 Apr 2025 14:46:21 GMT


Tuning Local LLMs With RAG Using Ollama and Langchain

Large Language Models (LLMs) are powerful, but they have one major limitation: they rely solely on the knowledge they were trained on.

This means they lack real-time, domain-specific updates unless retrained, an expensive and impractical process. This is where Retrieval-Augmented Generation (RAG) comes in.

RAG allows an LLM to retrieve relevant external knowledge before generating a response, effectively giving it access to fresh, contextual, and specific information.

Imagine having an AI assistant that not only remembers general facts but can also refer to your PDFs, notes, or private data for more precise responses.

This article takes a deep dive into how RAG works, how LLMs are trained, and how we can use Ollama and Langchain to implement a local RAG system that fine-tunes an LLM’s responses by embedding and retrieving external knowledge dynamically.

By the end of this tutorial, we’ll build a PDF-based RAG project that allows users to upload documents and ask questions, with the model responding based on stored data.

I’m not an AI expert. This article is a hands-on look at Retrieval Augmented Generation (RAG) with Ollama and Langchain, meant for learning and experimentation. There might be mistakes, and if you spot something off or have better insights, feel free to share. It’s nowhere near the scale of how enterprises handle RAG, where they use massive datasets, specialized databases, and high-performance GPUs.

What is Retrieval-Augmented Generation (RAG)?

RAG is an AI framework that improves LLM responses by integrating real-time information retrieval.

Instead of relying only on its training data, the LLM retrieves relevant documents from an external source (such as a vector database) before generating an answer.

How RAG works

  1. Query Input – The user submits a question.
  2. Document Retrieval – A search algorithm fetches relevant text chunks from a vector store.
  3. Contextual Response Generation – The retrieved text is fed into the LLM, guiding it to produce a more accurate and relevant answer.
  4. Final Output – The response, now grounded in the retrieved knowledge, is returned to the user.
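The four steps above need surprisingly little code. Here is a minimal, illustrative sketch in LangChain style; the retriever and llm objects are placeholders, not the actual app we build later in this article:

# Minimal sketch of the RAG loop described above (illustrative only).
# `retriever` and `llm` stand in for any LangChain-style retriever and chat model.
def rag_answer(question, retriever, llm):
    # 1-2. Retrieve the chunks most relevant to the question
    docs = retriever.get_relevant_documents(question)
    context = "\n\n".join(doc.page_content for doc in docs)
    # 3. Ground the model's answer in the retrieved text
    prompt = (
        "Answer the question using ONLY the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    # 4. Return the grounded response to the user
    return llm.invoke(prompt)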

Why use RAG instead of fine-tuning?

  • No retraining required – Traditional fine-tuning demands a lot of GPU power and labeled datasets. RAG eliminates this need by retrieving data dynamically.
  • Up-to-date knowledge – The model can refer to newly uploaded documents instead of relying on outdated training data.
  • More accurate and domain-specific answers – Ideal for legal, medical, or research-related tasks where accuracy is crucial.

How LLMs are trained (and why RAG improves them)

Before diving into RAG, let’s understand how LLMs are trained:

  1. Pre-training – The model learns language patterns, facts, and reasoning from vast amounts of text (e.g., books, Wikipedia).
  2. Fine-tuning – It is further trained on specialized datasets for specific use cases (e.g., medical research, coding assistance).
  3. Inference – The trained model is deployed to answer user queries.

While fine-tuning is helpful, it has limitations:

  • It is computationally expensive.
  • It does not allow dynamic updates to knowledge.
  • It may introduce biases if trained on limited datasets.

With RAG, we bypass these issues by allowing real-time retrieval from external sources, making LLMs far more adaptable.

Building a local RAG application with Ollama and Langchain

In this tutorial, we'll build a simple RAG-powered document retrieval app using LangChain, ChromaDB, and Ollama.

The app lets users upload PDFs, embed them in a vector database, and query for relevant information.

💡
All the code is available in our GitHub repository. You can clone it and start testing right away.

Installing dependencies

To avoid messing up our system packages, we’ll first create a Python virtual environment. This keeps our dependencies isolated and prevents conflicts with system-wide Python packages.

Navigate to your project directory and create a virtual environment:

cd ~/RAG-Tutorial
python3 -m venv venv

Now, activate the virtual environment:

source venv/bin/activate

Once activated, your terminal prompt should change to indicate that you are now inside the virtual environment.

With the virtual environment activated, install the necessary Python packages using requirements.txt:

pip install -r requirements.txt

This will install all the required dependencies for our RAG pipeline, including Flask, LangChain, Ollama, and Pydantic.
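If you are building the project from scratch instead of cloning the repository, a requirements.txt along these lines should cover the imports used in this tutorial (the package names are my best guess from the code below; pin versions to whatever you have tested):

flask
python-dotenv
werkzeug
langchain
langchain-community
langchain-text-splitters
chromadb
unstructured[pdf]
ollama
pydantic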

Once installed, you’re all set to proceed with the next steps!

Project structure

Our project is structured as follows:

RAG-Tutorial/
│── app.py              # Main Flask server
│── embed.py            # Handles document embedding
│── query.py            # Handles querying the vector database
│── get_vector_db.py    # Manages ChromaDB instance
│── .env                # Stores environment variables
│── requirements.txt    # List of dependencies
└── _temp/              # Temporary storage for uploaded files

Step 1: Creating app.py (Flask API Server)

This script sets up a Flask server with two endpoints:

  • /embed – Uploads a PDF and stores its embeddings in ChromaDB.
  • /query – Accepts a user query and retrieves relevant text chunks from ChromaDB.
  • route_embed(): Saves an uploaded file and embeds its contents in ChromaDB.
  • route_query(): Accepts a query and retrieves relevant document chunks.
import os
from dotenv import load_dotenv
from flask import Flask, request, jsonify
from embed import embed
from query import query
from get_vector_db import get_vector_db

load_dotenv()
TEMP_FOLDER = os.getenv('TEMP_FOLDER', './_temp')
os.makedirs(TEMP_FOLDER, exist_ok=True)

app = Flask(__name__)

@app.route('/embed', methods=['POST'])
def route_embed():
    if 'file' not in request.files:
        return jsonify({"error": "No file part"}), 400
    file = request.files['file']
    if file.filename == '':
        return jsonify({"error": "No selected file"}), 400
    embedded = embed(file)
    # Return 200 on success; the original one-liner returned 400 even when embedding succeeded
    if embedded:
        return jsonify({"message": "File embedded successfully"}), 200
    return jsonify({"error": "Embedding failed"}), 400

@app.route('/query', methods=['POST'])
def route_query():
    data = request.get_json()
    response = query(data.get('query'))
    # Return 200 on success; only fall back to 400 when the query produced nothing
    if response:
        return jsonify({"message": response}), 200
    return jsonify({"error": "Query failed"}), 400

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=8080, debug=True)

Step 2: Creating embed.py (embedding documents)

This file handles document processing, extracts text, and stores vector embeddings in ChromaDB.

  • allowed_file(): Ensures only PDFs are processed.
  • save_file(): Saves the uploaded file temporarily.
  • load_and_split_data(): Uses UnstructuredPDFLoader and RecursiveCharacterTextSplitter to extract text and split it into manageable chunks.
  • embed(): Converts text chunks into vector embeddings and stores them in ChromaDB.
import os
from datetime import datetime
from werkzeug.utils import secure_filename
from langchain_community.document_loaders import UnstructuredPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from get_vector_db import get_vector_db

TEMP_FOLDER = os.getenv('TEMP_FOLDER', './_temp')

def allowed_file(filename):
    return filename.lower().endswith('.pdf')

def save_file(file):
    filename = f"{datetime.now().timestamp()}_{secure_filename(file.filename)}"
    file_path = os.path.join(TEMP_FOLDER, filename)
    file.save(file_path)
    return file_path

def load_and_split_data(file_path):
    loader = UnstructuredPDFLoader(file_path=file_path)
    data = loader.load()
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=7500, chunk_overlap=100)
    return text_splitter.split_documents(data)

def embed(file):
    if file and allowed_file(file.filename):
        file_path = save_file(file)
        chunks = load_and_split_data(file_path)
        db = get_vector_db()
        db.add_documents(chunks)
        db.persist()
        os.remove(file_path)
        return True
    return False

Step 3: Creating query.py (Query processing)

It retrieves relevant information from ChromaDB and uses an LLM to generate responses.

  • get_prompt(): Creates a structured prompt for multi-query retrieval.
  • query(): Uses Ollama's LLM to rephrase the user query, retrieve relevant document chunks, and generate a response.
import os
from langchain_community.chat_models import ChatOllama
from langchain.prompts import ChatPromptTemplate, PromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain.retrievers.multi_query import MultiQueryRetriever
from get_vector_db import get_vector_db

LLM_MODEL = os.getenv('LLM_MODEL')
OLLAMA_HOST = os.getenv('OLLAMA_HOST', 'http://localhost:11434')

def get_prompt():
    QUERY_PROMPT = PromptTemplate(
        input_variables=["question"],
        template="""You are an AI assistant. Generate five reworded versions of the user question
        to improve document retrieval. Original question: {question}""",
    )
    template = "Answer the question based ONLY on this context:\n{context}\nQuestion: {question}"
    prompt = ChatPromptTemplate.from_template(template)
    return QUERY_PROMPT, prompt

def query(input):
    if input:
        llm = ChatOllama(model=LLM_MODEL)
        db = get_vector_db()
        QUERY_PROMPT, prompt = get_prompt()
        retriever = MultiQueryRetriever.from_llm(db.as_retriever(), llm, prompt=QUERY_PROMPT)
        chain = ({"context": retriever, "question": RunnablePassthrough()} | prompt | llm | StrOutputParser())
        return chain.invoke(input)
    return None

Step 4: Creating get_vector_db.py (Vector database management)

It initializes and manages ChromaDB, which stores text embeddings for fast retrieval.

  • get_vector_db(): Initializes ChromaDB with the Nomic embedding model and loads stored document vectors.
import os
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores.chroma import Chroma

CHROMA_PATH = os.getenv('CHROMA_PATH', 'chroma')
COLLECTION_NAME = os.getenv('COLLECTION_NAME')
TEXT_EMBEDDING_MODEL = os.getenv('TEXT_EMBEDDING_MODEL')
OLLAMA_HOST = os.getenv('OLLAMA_HOST', 'http://localhost:11434')

def get_vector_db():
    embedding = OllamaEmbeddings(model=TEXT_EMBEDDING_MODEL, show_progress=True)
    return Chroma(collection_name=COLLECTION_NAME, persist_directory=CHROMA_PATH, embedding_function=embedding)

Step 5: Environment variables

Create a .env file to store the environment variables:

TEMP_FOLDER = './_temp'
CHROMA_PATH = 'chroma'
COLLECTION_NAME = 'rag-tutorial'
LLM_MODEL = 'smollm:360m'
TEXT_EMBEDDING_MODEL = 'nomic-embed-text'
  • TEMP_FOLDER: Stores uploaded PDFs temporarily.
  • CHROMA_PATH: Defines the storage location for ChromaDB.
  • COLLECTION_NAME: Sets the ChromaDB collection name.
  • LLM_MODEL: Specifies the LLM model used for querying.
  • TEXT_EMBEDDING_MODEL: Defines the embedding model for vector storage.
I'm using these lightweight LLMs for this tutorial, as I don't have a dedicated GPU to run inference on larger models. You can change the models in the .env file.
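Whatever models you put there need to be pulled into Ollama first (and the Ollama service needs to be running), otherwise both the chat model and the embeddings will fail. For the defaults above:

ollama pull smollm:360m
ollama pull nomic-embed-text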

Testing the makeshift RAG + LLM Pipeline

Now that our RAG app is set up, we need to validate its effectiveness. The goal is to ensure that the system correctly:

  1. Embeds documents – Converts text into vector embeddings and stores them in ChromaDB.
  2. Retrieves relevant chunks – Fetches the most relevant text snippets from ChromaDB based on a query.
  3. Generates meaningful responses – Uses Ollama to construct an intelligent response based on retrieved data.

This testing phase ensures that our makeshift RAG pipeline is functioning as expected and can be fine-tuned if necessary.

Running the Flask server

We first need to make sure our Flask app is running. Open a terminal, navigate to your project directory, and activate your virtual environment:

cd ~/RAG-Tutorial
source venv/bin/activate  # On Linux/macOS
# or
venv\Scripts\activate  # On Windows (if using venv)

Now, run the Flask app:

python3 app.py

If everything is set up correctly, the server should start and listen on http://localhost:8080, and Flask will print its usual startup output in the terminal.


Once the server is running, we'll use curl commands to interact with our pipeline and analyze the responses to confirm everything works as expected.

1. Testing Document Embedding

The first step is to upload a document and ensure its contents are successfully embedded into ChromaDB.

curl --request POST \
  --url http://localhost:8080/embed \
  --header 'Content-Type: multipart/form-data' \
  --form file=@/path/to/file.pdf

Breakdown:

  • curl --request POST → Sends a POST request to our API.
  • --url http://localhost:8080/embed → Targets our embed endpoint running on port 8080.
  • --header 'Content-Type: multipart/form-data' → Specifies that we are uploading a file.
  • --form file=@/path/to/file.pdf → Attaches a file (in this case, a PDF) to be processed.
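If you prefer testing from Python instead of curl, a roughly equivalent upload using the requests library (an extra package, not part of the tutorial's requirements) would be:

# Upload a PDF to the /embed endpoint (assumes `pip install requests`)
import requests

with open("/path/to/file.pdf", "rb") as f:
    response = requests.post(
        "http://localhost:8080/embed",
        files={"file": ("file.pdf", f, "application/pdf")},
    )
print(response.status_code, response.json())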

Expected Response:

A JSON success message from the server, such as {"message": "File embedded successfully"}.

What’s Happening Internally?

  1. The server reads the uploaded PDF file.
  2. The text is extracted, split into chunks, and converted into vector embeddings.
  3. These embeddings are stored in ChromaDB for future retrieval.

If Something Goes Wrong:

| Issue | Possible Cause | Fix |
| --- | --- | --- |
| "status": "error" | File not found or unreadable | Check the file path and permissions |
| collection.count() == 0 | ChromaDB storage failure | Restart ChromaDB and check logs |

2. Querying the Document

Now that our document is embedded, we can test whether relevant information is retrieved when we ask a question.

curl --request POST \
  --url http://localhost:8080/query \
  --header 'Content-Type: application/json' \
  --data '{ "query": "Question about the PDF?" }'

Breakdown:

  • curl --request POST → Sends a POST request.
  • --url http://localhost:8080/query → Targets our query endpoint.
  • --header 'Content-Type: application/json' → Specifies that we are sending JSON data.
  • --data '{ "query": "Question about the PDF?" }' → Sends our search query to retrieve relevant information.

Expected Response:

A JSON object whose message field contains the LLM's answer, grounded in the retrieved document chunks.

What’s Happening Internally?

  1. The query is passed to ChromaDB to retrieve the most relevant chunks.
  2. The retrieved chunks are passed to Ollama as context for generating a response.
  3. Ollama formulates a meaningful reply based on the retrieved information.

If the Response is Not Good Enough:

| Issue | Possible Cause | Fix |
| --- | --- | --- |
| Retrieved chunks are irrelevant | Poor chunking strategy | Adjust chunk sizes and retry embedding |
| "llm_response": "I don't know" | Context wasn't passed properly | Check if ChromaDB is returning results |
| Response lacks document details | LLM needs better instructions | Modify the system prompt |

3. Fine-tuning the LLM for better responses

If Ollama’s responses aren’t detailed enough, we need to refine how we provide context.

Tuning strategies:

  1. Improve Chunking – Ensure text chunks are large enough to retain meaning but small enough for effective retrieval.
  2. Enhance Retrieval – Increase the number of retrieved chunks (n_results at the Chroma level, k in LangChain's retriever) so more relevant context reaches the LLM (see the sketch after this list).
  3. Modify the LLM Prompt – Add structured instructions for better responses.
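For the second point, the retriever built in query.py uses the vector store's default number of results. One hedged way to pull in more chunks (the exact value is something to experiment with) is to pass search_kwargs when creating the retriever:

# In query.py: ask the vector store for more chunks per generated sub-query.
# k=8 is an arbitrary example value, not a recommendation from the original article.
retriever = MultiQueryRetriever.from_llm(
    db.as_retriever(search_kwargs={"k": 8}), llm, prompt=QUERY_PROMPT
)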

Example system prompt for Ollama:

prompt = f"""
You are an AI assistant helping users retrieve information from documents.
Use the following document snippets to provide a helpful answer.
If the answer isn't in the retrieved text, say 'I don't know.'

Retrieved context:
{retrieved_chunks}

User's question:
{query_text}
"""

This ensures that Ollama:

    • Uses retrieved text properly.
    • Avoids hallucinations by sticking to available context.
    • Provides meaningful, structured answers.
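If you want these stricter instructions to flow through the LangChain chain from query.py instead of formatting the string by hand, one option (a sketch, not the repository's code) is to swap the answer template inside get_prompt() so the chain fills in the same {context} and {question} variables:

# Drop-in replacement for the `template` string in get_prompt() (query.py)
template = """You are an AI assistant helping users retrieve information from documents.
Use the following document snippets to provide a helpful answer.
If the answer isn't in the retrieved text, say 'I don't know.'

Retrieved context:
{context}

User's question:
{question}"""
prompt = ChatPromptTemplate.from_template(template)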

Final thoughts

Building this makeshift RAG LLM tuning pipeline has been an insightful experience, but I want to be clear, I’m not an AI expert. Everything here is something I’m still learning myself.

There are bound to be mistakes, inefficiencies, and things that could be improved. If you’re someone who knows better or if I’ve missed any crucial points, please feel free to share your insights.

That said, this project gave me a small glimpse into how RAG works. At its core, RAG is about fetching the right context before asking an LLM to generate a response.

It’s what makes AI chatbots capable of retrieving information from vast datasets instead of just responding based on their training data.

Large companies use this technique at scale, processing massive amounts of data, fine-tuning their models, and optimizing their retrieval mechanisms to build AI assistants that feel intuitive and knowledgeable.

What we built here is nowhere near that level, but it was still fascinating to see how we can direct an LLM’s responses by controlling what information it retrieves.

Even with this basic setup, we saw how much impact retrieval quality, chunking strategies, and prompt design have on the final response.

This makes me wonder, have you ever thought about training your own LLM? Would you be interested in something like this but fine-tuned specifically for Linux tutorials?

Imagine a custom-tuned LLM that could answer your Linux questions with accurate, RAG-powered responses, would you use it? Let us know in the comments!

by: Sreenath
Sat, 19 Apr 2025 13:00:24 GMT


Exploring Pages, Links, Tags, and Block References in Logseq

Simply creating well-formatted notes isn’t enough to manage the information you collect in daily life—accessibility is key.

If you can't easily retrieve that information and its context, the whole point of "knowledge management" falls apart.

From my experience using it daily for several months, I’d say Logseq does a better job of interlinking notes than any other app I’ve tried.

So, without further ado, let’s dive in.

If you’ve used Logseq before, you’ve likely noticed one key thing: everything is a block. Your data is structured as intentional, individual blocks. When you type a sentence and hit Enter, instead of just creating a new line, Logseq starts a new bullet point.

This design brings both clarity and complexity.

In Logseq, pages are made up of bullet-formatted text. Each page acts like a link—and when you search for a page that doesn’t exist, Logseq simply creates it for you.

Here’s the core idea: pages and tags function in a very similar way. You can think of a tag as a special kind of page that collects links to all content marked with that tag. For a deeper dive into this concept, I recommend checking out this forum post.

Logseq also supports block references, which let you link directly to any specific block—meaning you can reference a single sentence from one note in another.

📋
Ultimately, it is the end-user's creativity that creates a perfect content organization. There is no one way of using Logseq for knowledge management. It's up to you how you use it.

Creating a new page in Logseq

Click on the top-left search icon. This will bring up a search overlay. Here, enter the name of the page you want to create.

If no such page is present, you will get an option to create a new page.

Search for a note

For example, I created a page called "My Logseq Notes", and you can see this newly created page in the 'All pages' tab in the Logseq sidebar.

New page listed in "All Pages" tab

Logseq stores all the created pages in the pages directory inside the Logseq folder you have chosen on your system.

The Logseq pages directory in File Manager

There won't be any nested directories to store sub-pages; all of that is handled with links and tags. In fact, there is no point in looking into the Logseq directory manually. Use the app interface, where the data appears organized.

⌨️ Use keyboard shortcut for creating pages

Powerful tools like Logseq are best used with the keyboard. You can create pages, links, and references using only the keyboard, without touching the mouse.

The common syntax to create a page or link in Logseq is:

#One-word-page-name

You can press the # symbol and enter a one-word name. If no page with that name exists, a new page is created. Otherwise, a link to the existing page is added.

If you need to create a page with multiple words, use:

#[[Page with multiple words separated with space]]

Place the name of the note within double square brackets, [[ ]].

Create pages with single-word or multi-word names.

Using Tags

In the example above, I have created two pages, one without spaces in the name, while the other has spaces.

Both of them can be considered as tags.

Confused? How these pages are interlinked later is what actually defines whether each one is a page or a tag.

If you are using it as a 'special page' to accumulate similar content, then it can be considered a tag. If you are filling it with paragraphs of text, then it is a regular page.

Basically, a tag-page is also a page, but it holds links to all the pages marked with that tag.

To add a tag to a particular note, you can type #<tag-name> anywhere in the note. For convenience and better organization, you can add it at the end of the note.

Adding Simple Tags

Linking to a page

Creating a new page and adding a link to an existing page is the same process in Logseq. You have seen it above.

If you type [[]] and enter a name, a link to that page is created if it already exists. Otherwise, a new page is created.

In the short video below, you can see the process of linking a note in another note.

Adding a link to a page from another note in Logseq.


Referencing a block

The main flexibility of Logseq lies in the linking of individual blocks. In each note, you have a parent node, then child nodes and grandchild nodes, distinguished by their indentation.

So, in the case of block referencing, you should take care to indent the note blocks properly.

Now, type ((. A search box will appear above the cursor. Start typing something, and it will highlight the matching block anywhere in Logseq.

Referencing a block inside a note. The block we are adding is part of another note.

Similarly, you can right-click on a node and select "Copy block ref" to copy the reference code for that block.

Copy Block Reference

Now, if you paste this in another note, the main node content is pasted, and the rest of that block (the indented contents) will be visible on hover.

Hover over reference for preview
💡
Instead of the "Copy block ref", you can also choose "Copy block embed" and then paste the embed code. This will paste the whole block in the area where you pasted the embed code.

Once you have the block reference code, you can use it as a URL to link a particular word instead of pasting it raw on a line. To do that, use the Markdown link syntax:

[This is a link to the block](reference code of the block)

For example:

[This is a link to the block](((679b6c26-2ce9-48f2-be6a-491935b314a6)))

So, when you hover over the text, the referenced content is previewed.

Reference as Markdown Hyperlink

Now that you have the basic building blocks, you can start organizing your notes into a proper knowledge base.

In the next tutorial of this series, I'll discuss how you can use plugins and themes to customize Logseq.

by: Abhishek Prakash
Thu, 17 Apr 2025 06:27:20 GMT


FOSS Weekly #25.16: Ubuntu 25.04, Fedora 42, ParticleOS and a Lot More Linux Stuff

It's the release week. Fedora 42 is already out. Ubuntu 25.04 will be released later today along with its flavors like Kubuntu, Xubuntu, Lubuntu, etc.

In the midst of these two heavyweights, MX Linux and Manjaro also quickly released newer versions. For Manjaro, it is more of an ISO refresh, as it is a rolling release distribution.

Overall, a happening week for Linux lovers 🕺

💬 Let's see what else you get in this edition

  • ArcoLinux bids farewell.
  • Systemd working on its own Linux distro.
  • Looking at the origin of UNIX.
  • And other Linux news, tips, and, of course, memes!
  • This edition of FOSS Weekly is supported by Aiven.

❇️ Aiven for ClickHouse® - The Fastest Open Source Analytics Database, Fully Managed

ClickHouse processes analytical queries 100-1000x faster than traditional row-oriented systems. Aiven for ClickHouse® gives you the lightning-fast performance of ClickHouse–without the infrastructure overhead.

Just a few clicks is all it takes to get your fully managed ClickHouse clusters up and running in minutes. With seamless vertical and horizontal scaling, automated backups, easy integrations, and zero-downtime updates, you can prioritize insights–and let Aiven handle the infrastructure.

Managed ClickHouse database | Aiven
Aiven for ClickHouse® – fully managed, maintenance-free data warehouse ✓ All-in-one open source cloud data platform ✓ Try it for free

📰 Linux and Open Source News

ParticleOS is Systemd's attempt at a Linux distribution.

ParticleOS: Systemd’s Very Own Linux Distro in Making
A Linux distro from systemd? Sounds interesting, right?

🧠 What We’re Thinking About

Linus Torvalds was told that Git is more popular than Linux.

Git is More Popular than Linux: Torvalds
Linus Torvalds reflects on 20 years of Git.

🧮 Linux Tips, Tutorials and More

Desktop Linux is mostly neglected by the industry but loved by the community. For the past 12 years, It's FOSS has been helping people use Linux on their personal computers. And we are now facing the existential threat from AI models stealing our content.

If you like what we do and would love to support our work, please become It's FOSS Plus member. It costs $24 a year (less than the cost of a burger meal each month) and you get an ad-free reading experience with the satisfaction of helping the desktop Linux community.

Join It's FOSS Plus

👷 Homelab and Maker's Corner

These 28 cool Raspberry Pi Zero W projects will keep you busy.

28 Super Cool Raspberry Pi Zero W Project Ideas
Wondering what to do with your Raspberry Pi Zero W? Here are a bunch of project ideas you can spend some time on and satisfy your DIY craving.

✨ Apps Highlight

You can download YouTube videos using Seal on Android.

Seal: A Nifty Open Source Android App to Download YouTube Video and Audio
Download YouTube video/music (for educational purpose or with consent) with this little, handy Android app.

📽️ Videos I am Creating for You

See the new features of Ubuntu 25.04 in action in this video.

🧩 Quiz Time

Our Guess the Desktop Environment Crossword will test your knowledge.

Guess the Desktop Environment: Crossword
Test your desktop Linux knowledge with this simple crossword puzzle. Can you solve it all correctly?

Alternatively, can you guess all of these open source privacy tools correctly?

Know The Best Open-Source Privacy Tools
Do you utilize open-source tools for privacy?

💡 Quick Handy Tip

You can make Thunar open a new tab instead of a new window. This is handy when opening a folder from other apps, like a web browser, and it reduces screen clutter.


First, click on Edit → Preferences. Here, go to the Behavior tab. Now, under "Tabs and Windows", enable the first checkbox as shown above, or all three if you need the functionality of the other two.

🤣 Meme of the Week

We are generally a peaceful bunch, for the most part. 🫣


🗓️ Tech Trivia

On April 16, 1959, John McCarthy publicly introduced LISP, a programming language for AI that emphasized symbolic computation. This language remains influential in AI research today.

🧑‍🤝‍🧑 FOSSverse Corner

FOSSers are discussing VoIP, do you have any insights to add here?

A discussion over Voice Over Internet Protocol (VoIP)
I live in a holiday village where we have several different committees and meetings, for those not present to attend the meetings we do video conférences using voip. A few years back the prefered system was skype, we changed to whatsapp last year as we tend to use its messaging facilities and its free. We have a company who manages our accounts, they prefer using teams, paid for version as they can invoice us for its use … typical accountant. My question, does it make any difference in band w…

❤️ With love

Share it with your Linux-using friends and encourage them to subscribe (hint: it's here).

Share the articles in Linux Subreddits and community forums.

Follow us on Google News and stay updated in your News feed.

Opt for It's FOSS Plus membership and support us 🙏

Enjoy FOSS 😄

by: Abhishek Kumar
Tue, 15 Apr 2025 05:41:55 GMT


11 Vibe Coding Tools to 10x Your Development on Linux Desktop

Once upon a time, coding meant sitting down, writing structured logic, and debugging for hours.

Fast-forward to today, and we have Vibe Coding, a trend where people let AI generate entire chunks of code based on simple prompts. No syntax, no debugging, no real understanding of what’s happening under the hood. Just vibes.

Coined by OpenAI co-founder Andrej Karpathy, Vibe Coding is the act of developing software by giving natural language instructions to AI and accepting whatever it spits out.


Some people even take it a step further by using voice-to-text tools so they don’t have to type at all. Just describe your dream app, and boom, the AI makes it for you. Or does it?

People are building full-fledged SaaS products in days, launching MVPs overnight, and somehow making more money than seasoned engineers who swear by Agile methodologies.

And here I am, writing about them instead of cashing in myself. Life isn’t fair, huh?

But don’t get me wrong, I’m not here to hate. I’m here to expand on this interesting movement and hand you the ultimate arsenal to embrace vibe coding with these tools.

Non-FOSS Warning! Some of the applications mentioned here may not be open source. They have been included in the context of Linux usage. Also, some tools provide an interface to popular commercial LLMs like ChatGPT and Claude.

1. Aider - AI pair programming in your terminal


Aider is the perfect choice if you're looking for a pair programmer to help you ship code faster. It allows you to pair program with LLMs to edit code in your local Git repository. You can start a new project or work with an existing repo—all from your terminal.

Key Features

✅ Aider works best with Claude 3.7 Sonnet, DeepSeek R1 & Chat V3, OpenAI o1, o3-mini & GPT-4o, but can connect to almost any LLM, including local models.
✅ Aider makes a map of your entire codebase, which helps it work well in larger projects.
✅ Supports most popular programming languages: Python, JavaScript, Rust, Ruby, Go, C++, PHP, HTML, CSS, and more.
✅ Automatically commits changes with sensible commit messages. Use familiar Git tools to easily diff, manage, and undo AI changes.
✅ Use Aider from within your favorite IDE or editor. Ask for changes by adding comments to your code, and Aider will get to work.
✅ Add images and web pages to the chat to provide visual context, screenshots, and reference docs.
✅ Automatically lint and test your code every time Aider makes changes. It can fix problems detected by linters and test suites.
✅ Works best with LLM APIs but also supports web chat interfaces, making copy-pasting code seamless.

2. VannaAI - Chat with SQL Database


Writing SQL queries can be tedious, but VannaAI changes that by letting you interact with SQL databases using natural language.

Instead of manually crafting queries, you describe what you need, and VannaAI generates the SQL for you.

It works in two steps: train a RAG "model" on your data, then ask questions that return SQL queries.

Key Features

✅ Out-of-the-box support for Snowflake, BigQuery, Postgres, and more.
✅ The Vanna Python package and frontend integrations are all open-source, allowing deployment on your infrastructure.
✅ Database contents are never sent to the LLM unless explicitly enabled.
✅ Improves continuously by augmenting training data.
✅ Use Vanna in Jupyter Notebooks, Slackbots, web apps, Streamlit apps, or even integrate it into your own web app.

VannaAI makes querying databases as easy as having a conversation, making it a game-changer for both technical and non-technical users.

3. All Hands - Open source agents for developers


All Hands is an open-source platform for AI developer agents, capable of building projects, adding features, debugging, and more.

Competing with Devin, All Hands recently topped the SWE-bench leaderboard with 53% accuracy.

Key Features

✅ Use All Hands via an interactive GUI, command-line interface (CLI), or non-interactive modes like headless execution and GitHub Actions.
✅ Open-source freedom, built under the MIT license to ensure AI technology remains accessible to all.
✅ Handles complex tasks, from code generation to debugging and issue fixing.
✅ Developed in collaboration with AI safety experts like Invariant Labs to balance innovation and security.

To get started, install Docker 26.0.0+ and run OpenHands using the provided Docker commands. Once running, configure your LLM provider and start coding with AI-powered assistance.

4. Continue - Leading AI-powered code assistant


You must have heard of Cursor, the popular AI-powered IDE; Continue is similar to it but open source under the Apache license.

It is highly customizable and lets you add any language model for auto-completion or chat, which can immensely improve your productivity. You can add Continue to VS Code and JetBrains IDEs.

Key Features

✅ Continue autocompletes single lines or entire sections of code in any programming language as you type.
✅ Attach code or other context to ask questions about functions, files, the entire codebase, and more.
✅ Select code sections and press a keyboard shortcut to rewrite code from natural language.
✅ Works with Ollama, OpenAI, Together, Anthropic, Mistral, Azure OpenAI Service, and LM Studio.
✅ Pulls context from your codebase, GitLab issues, documentation, methods, Confluence pages, and files.
✅ Customizable through Data, Docs, Rules, MCP, and Prompts blocks.

5. Wave - Terminal with local LLMs


Wave terminal introduces BYOLLM (Bring Your Own Large Language Model), allowing users to integrate their own local or cloud-based LLMs into their workflow.

It currently supports local LLM providers such as Ollama, LM Studio, llama.cpp, and LocalAI while also enabling the use of any OpenAI API-compatible model.

Key Features

✅ Use local or cloud-based LLMs, including OpenAI-compatible APIs.
✅ Seamlessly integrate LLM-powered responses into your terminal workflow.
✅ Set the AI Base URL and AI Model in the settings or via CLI.
✅ Plans to include support for commercial models like Gemini and Claude.

6. Warp terminal - Agent mode (not open source)


After WaveTerm, we have another amazing contender in the AI-powered terminal space, Warp Terminal. I personally use this so I may sound biased. 😛

It’s essentially an AI-powered assistant that can understand natural language, execute commands, and troubleshoot issues interactively.

Instead of manually looking up commands or switching between documentation, you can simply describe the task in English and let Agent Mode guide you through it.

Key Features

✅ No need to remember complex CLI commands, just type what you want, like "Set up an Nginx reverse proxy with SSL", and Agent Mode will handle the details.
✅ Ran into a “port 3000 already in use” error? Just type "fix it", and Warp will suggest running kill $(lsof -t -i:3000). If that doesn’t work, it’ll refine the approach automatically.
✅ Works seamlessly with Git, AWS, Kubernetes, Docker, and any other tool with a CLI. If it doesn’t know a command, you can tell it to read the help docs, and it will instantly learn how to use the tool.
✅ Warp doesn’t send anything to the cloud without your permission. You approve each command before it runs, and it only reads outputs when explicitly allowed.

It seems like Warp is moving from a traditional AI-assisted terminal to an interactive AI-powered shell, making the command line much more intuitive.

Would you consider switching to it, or do you think this level of automation might be risky for some tasks?

7. Pieces : AI extension to IDE (not open source)


Pieces isn't a code editor itself; it's an AI-powered extension that supercharges editors like VS Code, Sublime Text, Neovim, and many more IDEs with real-time intelligence and memory.

Its highlighted feature is the Long-Term Memory Agent, which captures up to 9 months of coding context, helping you seamlessly resume work even after a long break.

Everything runs locally for full privacy. It understands your code, recalls snippets, and blends effortlessly into your dev tools to eliminate context switching.

Bonus: it’s free for now, with a free tier promised forever, but they will start charging soon, so early access might come with perks.

Key Features

✅ Stores 9 months of local coding context
✅ Integrates with Neovim, VS Code, and Sublime Text
✅ Fully on-device AI with zero data sharing
✅ Context-aware suggestions via Pieces Copilot
✅ Organize and share snippets using Pieces Drive
✅ Always-free tier promised, with early adopter perks

8. Aidermacs: AI aided coding in Emacs


Aidermacs is for the Emacs power users who want that sweet Cursor-style AI experience, but without leaving their beloved terminal.

It’s a front-end for the open-source Aider, bringing powerful pair programming into Emacs with full respect for its workflows and philosophy.

Whether you're using GPT-4, Claude, or even DeepSeek, Aidermacs auto-detects your available models and lets you chat with them directly inside Emacs.

And yes, it's deeply customizable, as all good Emacs things should be.

Key Features

✅ Integrates Aider into Emacs for collaborative coding
✅ Intelligent model selection from OpenAI, Anthropic, Gemini, and more
✅ Built-in Ediff for side-by-side AI-generated changes
✅ Fine-grained file control: edit, read-only, scratchpad, and external
✅ Fully theme-aware with Emacs-native UI integration
✅ Works well in terminal via vterm with theme-based colors

9. Jeddict AI Assistant


This one is for the Java folks. It's a plugin for Apache NetBeans. I remember using NetBeans back in school, and if this AI stuff was around then, I swear I would've aced my CS practicals.

This isn’t your average autocomplete tool. Jeddict AI Assistant brings full-on AI integration into your IDE: smarter code suggestions, context-aware documentation, SQL query help, even commit messages.

It's especially helpful if you're dealing with big Java projects and want AI that actually understands what’s going on in your code.

Key Features

✅ Smart, inline code completions using OpenAI, DeepSeek, Mistral, and more
✅ AI chat with full awareness of project/class/package context
✅ Javadoc creation & improvement with a single shortcut
✅ Variable renaming, method refactoring, and grammar fixes via AI hints
✅ SQL query assistance & inline completions in the database panel
✅ Auto-generated Git commit messages based on your diffs
✅ Custom rules, file context preview, and experimental in-editor updates
✅ Fully customizable AI provider settings (supports LM Studio, Ollama, GPT4All too!)

10. Amazon CodeWhisperer


If your coding journey revolves around AWS services, then Amazon CodeWhisperer might be your ideal AI-powered assistant.

While it works like other AI coding tools, its real strength lies in its deep integration with AWS SDKs, Lambda, S3, and DynamoDB.

CodeWhisperer is fine-tuned for cloud-native development, making it a go-to choice for developers building serverless applications, microservices, and infrastructure-as-code projects.

Since it supports Visual Studio Code and JetBrains IDEs, AWS developers can seamlessly integrate it into their workflow and get AWS-specific coding recommendations that follow best practices for scalability and security.

Plus, individual developers get free access, making it an attractive option for solo builders and startup developers.

Key Features

✅ Optimized code suggestions for AWS SDKs and cloud services.
✅ Built-in security scanning to detect vulnerabilities.
✅ Supports Python, Java, JavaScript, and more.
✅ Free for individual developers.

11. Qodo AI (previously Codium)


If you’ve ever been frustrated by the limitations of free AI coding tools, qodo might be the answer.

Supporting over 50 programming languages, including Python, Java, C++, and TypeScript, qodo integrates smoothly with Visual Studio Code, IntelliJ, and JetBrains IDEs.

It provides intelligent autocomplete, function suggestions, and even code documentation generation, making it a versatile tool for projects of all sizes.

While it may not have some of the advanced features of paid alternatives, its zero-cost access makes it a game-changer for budget-conscious developers.

Key Features

✅ Unlimited free code completions with no restrictions.
✅ Supports 50+ programming languages, including Python, Java, and TypeScript.
✅ Works with popular IDEs like Visual Studio Code and JetBrains.
✅ Lightweight and responsive, ensuring a smooth coding experience.

Final thoughts

📋
I deliberately skipped IDEs from this list. I have a separate list of editors for vibe coding on Linux.

With time, we’re undoubtedly going to see more AI-assisted coding take center stage. As Anthropic CEO Dario Amodei puts it, AI will write 90% of code within six months and could automate software development entirely within a year.

Whether that’s an exciting leap forward or a terrifying thought depends on how much you trust your AI pair programmer.

If you’re diving into these tools, I highly recommend brushing up on the basics of coding and version control.

AI can write commands for you, but if you don’t know what it’s doing, you might go from “I just built the next billion-dollar SaaS!” to “Why did my AI agent just delete my entire codebase?” in a matter of seconds.


That said, this curated list of amazing open-source tools should get you started. Whether you're a seasoned developer or just someone who loves typing cool things into a terminal, these tools will level up your game.

Just remember: the AI can vibe with you, but at the end of the day, you're still the DJ of your own coding playlist (sorry for the cringy line 👉👈).


by: John Paul Wohlscheid
Sun, 13 Apr 2025 14:34:36 GMT


Birth of Unix

Sometimes it feels like Unix has been around forever, at least to users who have used Linux or BSD in any form for a decade or more now.

Its ideals laid the groundwork for Linux, and it underpins macOS. A modern descendant, FreeBSD, is used on thousands of servers, while Linux rules the server space along with the supercomputer industry.

Even though the original form of it is history now, it remains a significant development that helped start Linux and more.

But initially, it had a rocky start and had to be developed in secret.

Punch Cards and Multics


Back in the days when computers took up whole rooms, the main method of using them was the punch card interface. Computers didn't come with an operating system; they had a programming language built into them. If you wanted to run a program, you had to use a device to enter your program and the data on a series of punch cards.

According to an interview with Brian Kernighan, one of the Unix creators, "So if you had a 1,000-line program, you would have 1,000 cards. There were no screens, no interactive output. You gave your cards to the computer operator and waited for your printout that was the result of your program."


At the time, all text output from these computers was capitalized. Kernighan wrote an application to handle the formatting of his thesis. "And so thesis was basically three boxes of cards, 6,000 cards in each box, probably weighed 10, 12 pounds, five kilograms. And so you’d take these three boxes, 1,000 cards of which the first half of the first box was the program and then the remaining 5,000 cards was the thesis. And you would take those three boxes and you’d hand them to the operator. And an hour or two or three later back would come a printed version of thesis again."

Needless to say, this makes modern thesis writing seem effortless, right?

In the late 1950s, AT&T, Massachusetts Institute of Technology, and General Electric created a project to revolutionize computing and push it beyond the punch card.

The project was named Multics or “Multiplexed Information and Computing Service”. According to the paper that laid out the plans for the project, there were nine major goals:

  • Convenient remote terminal use.
  • Continuous operation analogous to power & telephone services.
  • A wide range of system configurations, changeable without system or user program reorganization.
  • A high reliability internal file system.
  • Support for selective information sharing.
  • Hierarchical structures of information for system administration and decentralization of user activities.
  • Support for a wide range of applications.
  • Support for multiple programming environments & human interfaces.
  • The ability to evolve the system with changes in technology and in user aspirations.

Multics would be a time-sharing computer, instead of relying on punch cards. This means that users could log into the system via a terminal and use it for an allotted period of time. This would turn the computer from a system administered by a high priest class (Steven Levy mentions this concept in his book Hackers) into something that could be accessed by anyone with the necessary knowledge.

The project was very ambitious. Unfortunately, turning ideas into reality takes time. Bell Labs withdrew from the project in 1969. They had joined the project to get a time-sharing operating system for their employees, but there had been little progress.

The lessons learned from Multics eventually helped in the creation of Unix, more on that below.

To Space Beyond

Image Credits: Multicians / A team installing GE 645 mainframe in Paris

The Bell engineers who had worked on Multics (including Ken Thompson and Dennis Ritchie) were left without an operating system, but with tons of ideas. In the last days of their involvement in Multics, they had started writing an operating system on a GE-645 mainframe. But then the project ended, and they no longer needed the mainframe.

They lobbied their bosses to buy a mini-computer to start their own operating system project but were denied. They continued to work on the project in secret. Often they would get together and discuss what they would want in an operating system and sketch out ideas for the architecture.

During this time, Thompson started working on a little side project. He wrote a game for the GE-645 named Space Travel. The game "simulated all the major bodies in the solar system along with a spaceship that could fly around them".

Unfortunately, it was expensive to run on the mainframe. Each game cost $75 to play. So, Thompson went looking for a different, cheaper computer to use. He discovered a PDP-7 mini-computer left over from a previous project. He rewrote the game to run on the PDP-7.

PDP-7, Image Credits: Wikipedia

In the summer of 1969, Thompson's wife took their newborn son to visit her parents. Thompson took advantage of this time and newly learned programming skills to start writing an operating system for the PDP-7. Since he saw this new project as a cut-down version of Multics, he named it “Un-multiplexed Information and Computing Service," or Unics. It was eventually changed to Unix.

Other Bell Labs employees joined the project. The team quickly ran into limitations with the hardware itself. The PDP-7 was in its early stages, so they had to figure out how to get their hands on a newer computer. They knew that their bosses would never buy a new system because "lab's management wasn't about to allow any more research on operating systems."

At the time, Bell Labs produced lots of patents. According to Kernighan, "typically one or two a day at that point." It was time-consuming to create applications for those patents because the formatting required by the government was very specific.

At the time, there were no commercial word processing programs capable of handling the formatting. The Unix group offered to write a program for the patent department that would run on a shiny new PDP-11. They also promised to have it done before any commercial software would be available to do the same. Of course, they failed to mention that they would need to write an operating system for the software to run on.

Their bosses agreed to the proposal and placed an order for a PDP-11 in May 1970. The computer arrived quickly, but it took six months for the drives to arrive.

PDP-11/70, Image Credits: Wikipedia

In the meantime, the team continued to write Unix on the PDP-7, making it the platform on which the first version of Unix was developed. Once the PDP-11 was up and running, the team ported what they had to the new system. In short order, the new patent application software was unveiled to the patent department. It was a hit. The management was so pleased with the results, they bought the Unix team their own PDP-11.


With a more powerful computer at their command, work on Unix continued. In 1971, the team released its first official manual: The UNIX Programmer's Manual. The operating system made its official debut to the world via a paper presented at the 1973 symposium of the Association for Computing Machinery. This was followed by a flood of requests for copies.

This brought up new issues. AT&T, the company that financed Bell Labs, couldn't sell an operating system. In 1956, AT&T was forced by the US government to agree to a consent decree.

This consent decree prohibited AT&T from "selling products not directly related to telephones and telecommunications, in return for its legal monopoly status in running the country's long-distance phone service." The solution was to release "the Unix source code under license to anyone who asked, charging only a nominal fee".

The consent decree also prohibited AT&T from providing tech support. So, the code was essentially available as-is. This led to the creation of the first user groups as Unix adopters banded together to provide mutual assistance.

C Programming, The Necessary Catalyst

The creation of the C programming language by Dennis Ritchie at Bell Labs helped Unix progress through its later versions, and indirectly made BSD and Linux possible.

And now we have many programming languages and operating systems, including several variants of Linux, BSD, and other Unix-like systems.

by: Sreenath
Fri, 11 Apr 2025 15:09:58 GMT


Formatting Text in Logseq

Logseq is a highly efficient note-taking and knowledge management app with decent Markdown support.

While using Logseq, one thing to keep in mind is that the text formatting isn't pure Markdown. This is because Logseq uses bullet blocks as the basic unit of content and also supports Org-mode.

Whenever you start a new document or press Enter after a sentence, a new block is created — and this block can be referenced from anywhere within Logseq. That’s part of what makes Logseq so powerful.

Still, formatting your notes clearly is just as important. In this article, we’ll take a closer look at how text formatting works in Logseq.

Basic Markdown syntax

As I said above, since Logseq supports Markdown, all the basic Markdown syntax will work here.

You remember the Markdown syntax, right?

  • Headings (six levels): # Level One, ## Level Two, ### Level Three, #### Level Four, ##### Level Five, ###### Level Six
  • Hyperlink: [Link Text](Link Address/URL)
  • Image: ![Image Caption](Image path)
  • Bold text: **Bold Text**
  • Italic text: *Italics*
  • Struck-out text: ~~Striked-out Text~~
  • Inline code: `inline code`
  • Code block: wrap the code in ``` on the lines before and after it
  • Table: a |Column Header|Column Header| row, a | --- | --- | separator row, then | Items | Items | rows
💡
You can press the / key to get all the available format options.

Adding quotes

Quotes can be added in Logseq using two methods.

First, using the traditional Markdown method of adding a quote by using > in front of the text.

> This should appear as a quote

Second, since Logseq has Org-mode support, you can create a quote block using the syntax:

#+BEGIN_QUOTE
Your Quote text here
#+END_QUOTE

You can access this by pressing the < key, typing Quote, and hitting Enter.

🚧
If you are using the quotes with a preceding > syntax, then every markdown renderer will render the document properly. The org-mode syntax won't work in all environments.
Adding Quotes in Logseq

Add an admonition block

Admonition blocks, or callouts, come in handy for highlighting a particular piece of information in your notes, like a tip or a warning.

The warning below is the best example here.

🚧
These admonition blocks are a feature of the Logseq app. You cannot expect them to work properly in other apps, so plain-text Markdown users should keep this in mind.

The usual Org-mode syntax for these blocks is:

#+BEGIN_<BLOCK NAME>
Your Block Text
#+END_<BLOCK NAME>

For example, a simple tip block syntax looks like:

#+BEGIN_TIP
This is a tip block
#+END_TIP

Let's take a look at some other interesting syntax names:

BLOCK NAME
NOTE
TIP
IMPORTANT
CAUTION
PINNED
Formatting Text in Logseq
Admonition Blocks in Logseq.

You can access this by typing the < key and then searching for the required block.


Admonition blocks in Logseq.

Conclusion

The ability to add a callout box makes your notes more useful, in my opinion. At least it does for me, as I can highlight important information in my notes. I am a fan of them, and you can see plenty of them in my articles on It's FOSS as well.

Stay tuned to this series, as I'll cover adding references in Logseq in the next part.

by: Abhishek Prakash
Thu, 10 Apr 2025 05:17:14 GMT


FOSS Weekly #25.15: Clapgrep, APT 3.0, Vibe Coding, AI in Firefox and More

Linux YouTuber Brodie Robertson liked It's FOSS' April Fool joke so much that he made a detailed video on it. It's quite fun to watch, actually 😄

💬 Let's see what else you get in this edition

  • A new APT release.
  • Photo management software
  • Steam Client offering many refinements for Linux.
  • And other Linux news, tips, and, of course, memes!
  • This edition of FOSS Weekly is supported by Internxt.
SPONSORED
FOSS Weekly #25.15: Clapgrep, APT 3.0, Vibe Coding, AI in Firefox and More

❇️ Future-Proof Your Cloud Storage With Post-Quantum Encryption

Get 82% off any Internxt lifetime plan—a one-time payment for private, post-quantum encrypted cloud storage.

No subscriptions, no recurring fees, 30-day money back policy.

Get this deal

📰 Linux and Open Source News

The APT 3.0 release has finally arrived with a better user experience.

A Colorful APT 3.0 Release Impresses with its New Features
The latest APT release features a new solver, alongside several user experience enhancements.
FOSS Weekly #25.15: Clapgrep, APT 3.0, Vibe Coding, AI in Firefox and More

🧠 What We’re Thinking About

Mozilla has begun the initial implementation of AI features into Firefox.

I Tried This Upcoming AI Feature in Firefox
Firefox will be bringing an experimental AI-generated link previews, offering quick on-device summaries. Here’s my quick experience with it.
FOSS Weekly #25.15: Clapgrep, APT 3.0, Vibe Coding, AI in Firefox and More

🧮 Linux Tips, Tutorials and More

7 Code Editors You Can Use for Vibe Coding on Linux
Want to try vibe coding? Here are the best editors I recommend using on Linux.
FOSS Weekly #25.15: Clapgrep, APT 3.0, Vibe Coding, AI in Firefox and More

Desktop Linux is mostly neglected by the industry but loved by the community. For the past 12 years, It's FOSS has been helping people use Linux on their personal computers. And we are now facing an existential threat from AI models stealing our content.

If you like what we do and would love to support our work, please become an It's FOSS Plus member. It costs $24 a year (less than the cost of a burger meal each month) and you get an ad-free reading experience with the satisfaction of helping the desktop Linux community.

Join It's FOSS Plus

👷 Homelab and Maker's Corner

This time, we have a DIY biosignal tool that can be used for neuroscience research and education purposes.

DIY Neuroscience: Meet the Open Source PiEEG Kit for Brain and Body Signals
The PiEEG kit is an open source, portable biosignal tool designed for research, measuring EEG, EMG, EKG, and EOG signals. Want to crowdfund the project?
FOSS Weekly #25.15: Clapgrep, APT 3.0, Vibe Coding, AI in Firefox and More

✨ Apps Highlight

Clapgrep is a powerful open source search tool for Linux.

Clapgrep: An Easy-to-Use Open Source Linux App To Search Through Your PDFs and Text Documents
Want to look for something in your text documents? Use Clapgrep to quickly search for it!
FOSS Weekly #25.15: Clapgrep, APT 3.0, Vibe Coding, AI in Firefox and More

📽️ Videos I am Creating for You

See the new features in APT 3.0 in action in our latest video.

🧩 Quiz Time

Take a trip down memory lane with our 80s Nostalgic Gadgets puzzle.

80s Nostalgic Gadgets
Remember the 80s? This quiz is for you :)
FOSS Weekly #25.15: Clapgrep, APT 3.0, Vibe Coding, AI in Firefox and More

How sharp is your Git knowledge? Our latest crossword will put it to the test.

💡 Quick Handy Tip

In Firefox, you can delete temporary browsing data using the "Forget" button. First, right-click on the toolbar and select "Customize Toolbar".

FOSS Weekly #25.15: Clapgrep, APT 3.0, Vibe Coding, AI in Firefox and More

Now, from the list, drag and drop the "Forget" button to the toolbar. When you click on it, you will be asked whether to clear 5 minutes, 2 hours, or 24 hours of browsing data; pick one and click on "Forget!".

FOSS Weekly #25.15: Clapgrep, APT 3.0, Vibe Coding, AI in Firefox and More

🤣 Meme of the Week

The glow up is real with this one. 🤭

FOSS Weekly #25.15: Clapgrep, APT 3.0, Vibe Coding, AI in Firefox and More

🗓️ Tech Trivia

On April 7, 1964, IBM introduced the System/360, the first family of computers designed to be fully compatible with each other, unlike earlier systems where each model had its own unique software and hardware.

🧑‍🤝‍🧑 FOSSverse Corner

One of our regular FOSSers played around with ARM64 on Linux and liked it.

ARM64 on Linux is Fun!
Hi, I’ve been playing with my Pinebook Pro lately and tried Armbian, Manjaro, Void and Gentoo on it. It’s been fun! New things learned like boot from u-boot, then moving to tow-boot as “first boot loader” which starts grub. I tried four distroes on a SD, Manjaro was the official and Armbian also was an .iso. Void and Gentoo I installed thrue chroot manually. I’m biased but it says something (at least I think so) that I did a Gentoo install twice to this small laptop. First one was just to try it…
FOSS Weekly #25.15: Clapgrep, APT 3.0, Vibe Coding, AI in Firefox and More

❤️ With love

Share it with your Linux-using friends and encourage them to subscribe (hint: it's here).

Share the articles in Linux Subreddits and community forums.

Follow us on Google News and stay updated in your News feed.

Opt for It's FOSS Plus membership and support us 🙏

Enjoy FOSS 😄

by: Sreenath
Mon, 07 Apr 2025 16:18:54 GMT


Installing Logseq Knowledge Management Tool on Linux

Logseq is a versatile open source tool for knowledge management. It is regarded as one of the best open source alternatives to the popular proprietary tool Obsidian.

While it covers the basics of note-taking, it also doubles as a powerful task manager and journaling tool.

Installing Logseq Knowledge Management Tool on Linux
Logseq Desktop

What sets Logseq apart from traditional note-taking apps is its unique organization system, which forgoes hierarchical folder structures in favor of interconnected, block-based notes. This makes it an excellent choice for users seeking granular control and flexibility over their information.

In this article, we’ll explore how to install Logseq on Linux distributions.

Use the official AppImage

For Linux systems, Logseq officially provides an AppImage. You can head over to the downloads page and grab the AppImage file.

It is advised to use tools like AppImageLauncher (hasn't seen a new release for a while, but it is active) or GearLever to create a desktop integration for Logseq.

Fret not, if you would rather not use a third-party tool, you can do it yourself as well.

First, create a folder in your home directory to store all the AppImages. Next, move the Logseq AppImage to this location and give the file execution permission.

Installing Logseq Knowledge Management Tool on Linux
Go to AppImage properties

Right-click on the AppImage file and go to the file properties. Here, in the Permissions tab, select "Allow Executing as a Program" or "Executable as Program"; the label varies by distro, but it means the same thing.

Here's how it looks on a distribution with GNOME desktop:

Installing Logseq Knowledge Management Tool on Linux
Toggle Execution permission

Once done, you can double-click to open Logseq app.
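
If you prefer doing this from the terminal instead of the file manager, the same steps boil down to a few commands. This is just a rough sketch; the AppImage file name below is an assumption, so adjust it to match the file you actually downloaded:

# create a folder for AppImages, move the downloaded file there and make it executable
mkdir -p ~/AppImages
mv ~/Downloads/Logseq-linux-*.AppImage ~/AppImages/
chmod +x ~/AppImages/Logseq-linux-*.AppImage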

🚧
If you are using Ubuntu 24.04 and above, you won't be able to open the AppImage of Logseq due to a change in the AppArmor policy. You can either use other sources like Flatpak or take a look at a less secure alternative.

Alternatively, use the 'semi-official' Flatpak

Logseq has a Flatpak version available. This is not an official offering from the Logseq team, but is provided by a developer who also contributes to Logseq.

First, make sure your system has Flatpak support. If not, enable Flatpak support and add Flathub repository by following our guide:

Using Flatpak on Linux [Complete Guide]
Learn all the essentials for managing Flatpak packages in this beginner’s guide.
Installing Logseq Knowledge Management Tool on Linux

Now, install Logseq either from a Flatpak supported software center like GNOME Software:

Installing Logseq Knowledge Management Tool on Linux
Install Logseq from GNOME Software

Or install it using the terminal with the following command:

flatpak install flathub com.logseq.Logseq
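
Once installed, you can launch Logseq from the application menu, or start it from the terminal using the usual Flatpak run syntax with the same application ID:

flatpak run com.logseq.Logseq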

Other methods

For Ubuntu users and those who have Snap set up, there is an unofficial Logseq client in the Snap store. You can go with that if you prefer.

There are also packages available in the AUR for Logseq desktop clients. Arch Linux users can take a look at these packages and get them installed via the terminal using the Pamac package manager.

Post Installation

Once you have installed Logseq, open it. This will bring you to the temporary journal page.

You need to open a local folder where Logseq will store your work, to avoid potential data loss. For this, click on the "Add a graph" button on the top-right, as shown in the screenshot below.

Installing Logseq Knowledge Management Tool on Linux
Click on "Add a graph"

On the resulting page, click on "Choose a folder" button.

Installing Logseq Knowledge Management Tool on Linux
Click "Choose a folder"

From the file chooser, either create a new directory or select an existing directory and click "Open".

Installing Logseq Knowledge Management Tool on Linux
Select a location

That's it. You can start using Logseq now. And I'll help you with that. I'll be sharing regular tutorials on using Logseq for the next few days/weeks here. Stay tuned.

by: Abhishek Kumar
Sat, 05 Apr 2025 06:40:23 GMT


7 Code Editors You Can Use for Vibe Coding on Linux

There was a time when coding meant painstakingly writing every line, debugging cryptic errors at 3 AM, and pretending to understand regex. But in 2025? Coding has evolved, or rather, it has vibed into something entirely new.

Enter Vibe Coding, a phenomenon where instead of manually structuring functions and loops, you simply tell AI what you want, and it does the hard work for you.

This approach has taken over modern software development. Tools like Cursor and Windsurf, AI-powered code editors built specifically for this new workflow, are helping developers create entire applications without in-depth coding knowledge.

Gone are the days of memorizing syntax. Now, you can describe an app idea in plain English, and AI will generate, debug, and even refactor the code for you.

At first, it sounded too good to be true. But then people started launching SaaS businesses with nothing but Vibe Coding, using AI to write everything from landing pages to backend logic.

We thought, since the future of coding is AI-assisted, you’ll need the right tools to make the most of it.

So, here’s a handpicked list of the best code editors for vibe coding in 2025, designed to help you turn your wildest ideas into real projects, fast. 💨

🚧
NON-FOSS Warning: Not all the editors mentioned in this article are open source. While some are, many of the AI-powered features provided by these tools rely on cloud services that often include a free tier, but are not entirely free to use. AI compute isn't cheap! When local LLM support is available, I've made sure to mention it specifically. Always check the official documentation or pricing page before diving in.

1. Zed

7 Code Editors You Can Use for Vibe Coding on Linux

If VS Code feels sluggish and Cursor is a bit too heavy on the vibes, then Zed might just be your new favorite playground.

Written entirely in Rust, Zed is built for blazing fast speed. It’s designed to utilize multiple CPU cores and your GPU, making every scroll, search, and keystroke snappy as heck.

And while it's still a relatively new player in the editor world, the Zed team is laser-focused on building the fastest, most seamless AI-native code editor out there.

You get full AI interaction built right into the editor, thanks to the Assistant Panel and inline assistants that let you refactor, generate, and edit code using natural language, without leaving your flow.

Want to use Claude 3.5, a self-hosted LLM via Ollama, or something else? Zed’s open API lets you plug in what works for you.

Key Features:

✅ Built entirely in Rust for extreme performance and low latency.
✅ Native AI support with inline edits, slash commands, and fast refactoring.
✅ Assistant Panel for controlling AI interactions and inspecting suggestions.
✅ Plug-and-play LLM support, including Ollama and Claude via API.
✅ Workflow Commands to automate complex tasks across multiple files.
✅ Custom Slash Commands with WebAssembly or JSON for tailored AI workflows.

2. Flexpilot IDE

7 Code Editors You Can Use for Vibe Coding on Linux

Flexpilot IDE joins the growing league of open-source, AI-native code editors that prioritize developer control and privacy.

Forked from VS Code, it's designed to be fully customizable, letting you bring your own API keys or run local LLMs (like via Ollama) for a more private and cost-effective AI experience.

Much like Zed, it takes a developer-first approach: no locked-in services, no mysterious backend calls. Just a clean, modern editor that plays nice with whatever AI setup you prefer.

Key Features

✅ AI-powered autocomplete with context-aware suggestions
✅ Simultaneously edit multiple files in real-time with AI assistance
✅ Ask code-specific questions in a side panel for instant guidance
✅ Refactor, explain, or improve code directly in your files
✅ Get instant AI help with a keyboard shortcut, no interruptions
✅ Talk to your editor and get code suggestions instantly
✅ Run commands and debug with AI assistance inside your terminal
✅ Reference code elements and editor data precisely
✅ AI-powered renaming of variables, functions, and classes
✅ Generate commit messages and PR descriptions in a click
✅ Track token consumption across AI interactions
✅ Use any LLM: OpenAI, Claude, Mistral, or local Ollama
✅ Compatible with GitHub Copilot and other VSCode extensions

3. VS Code with GitHub Copilot

7 Code Editors You Can Use for Vibe Coding on Linux

While GitHub Copilot isn’t a standalone code editor, it’s deeply integrated into Visual Studio Code, which makes sense since Microsoft owns both GitHub and VS Code.

As one of the most widely used AI coding assistants, Copilot provides real-time AI-powered code suggestions that adapt to your project’s context.

Whether you’re writing Python scripts, JavaScript functions, or even Go routines, Copilot speeds up development by generating entire functions, automating repetitive tasks, and even debugging your code.

Key Features:

✅ AI-driven code suggestions in real-time.
✅ Supports multiple languages, including Python, JavaScript, and Go.
✅ Seamless integration with VS Code, Neovim, and JetBrains IDEs.
✅ Free for students and open-source developers.

4. Pear AI

7 Code Editors You Can Use for Vibe Coding on Linux

Pear AI is a fork of VSCode, built with AI-first development in mind. It’s kinda like Cursor or Windsurf, but with a twist: you can plug in your own AI server, run local models via Ollama (which is probably the easiest route), or just use theirs.

It has autocomplete, context-aware chat, and a few other handy features.

Now, full transparency, it's still a bit rough around the edges. Not as polished, a bit slow at times, and the updates? Eh, not super frequent.

The setup can feel a little over-engineered if you’re just trying to get rolling. But… I see potential here. If the right devs get their hands on it, this could shape up into something big.

Key Features

✅ VSCode-based editor with a clean UI and familiar feel
✅ "Knows your code" – context-aware chat that actually understands your project
✅ Works with remote APIs or local LLMs (Ollama integration is the easiest)
✅ Built-in AI code generation tools curated into a neat catalog
✅ Autocomplete and inline code suggestions, powered by your model of choice
✅ Ideal for devs experimenting with custom AI backends or local AI setups

5. Fleet by JetBrains

7 Code Editors You Can Use for Vibe Coding on Linux

If you've ever written Java, Python, or even Kotlin, chances are you’ve used or at least heard of JetBrains IDEs like IntelliJ IDEA, PyCharm, or WebStorm.

JetBrains has long been the gold standard for feature-rich developer environments.

Now, they're stepping into the future of coding with Fleet, a modern, lightweight, and AI-powered code editor designed to simplify your workflow while keeping JetBrains' signature intelligence baked in.

Fleet isn’t trying to replace IntelliJ, it’s carving a space of its own: minimal UI, fast startup, real-time collaboration, and enough built-in tools to support full-stack projects out of the box.

And with JetBrains’ new AI assistant baked in, you're getting contextual help, code generation, and terminal chat, all without leaving your editor.

Key Features

✅ Designed for fast startup and low memory usage without sacrificing features
✅ Full-Stack Language Support- Java, Kotlin, JavaScript, TypeScript, Python, Go, and more
✅ Real-Time Collaboration.
✅ Integrated Git Tools like Diff viewer, branch management, and seamless commits
✅ Use individual or shared terminals in collaborative sessions
✅ Auto-generate code, fix bugs, or chat with your terminal
✅ Docker & Kubernetes Support - Manage containers right inside your IDE
✅ Preview, format, and edit Markdown files with live previews
✅ Custom themes, keymaps, and future language/tech support via plugins

6. Cursor

7 Code Editors You Can Use for Vibe Coding on Linux

Cursor is a heavily modified fork of VSCode with deep AI integration. It supports multi-file editing, inline chat, autocomplete for code, markdown, and even JSON.

It’s fast, responsive, and great for quickly shipping out tutorials or apps. You also get terminal autocompletion and contextual AI interactions right in your editor.

Key Features

✅ Auto-imports and suggestions optimized for TypeScript and Python
✅ Generate entire app components or structures with a single command
✅ Context-gathering assistant that can interact with your terminal
✅ Drag & drop folders for AI-powered explanations and refactoring
✅ Process natural language commands inside the terminal
✅ AI detects issues in your code and suggests fixes
✅ Choose from GPT-4o, Claude 3.5 Sonnet, o1, and more

7. Windsurf (Previously Codeium)

7 Code Editors You Can Use for Vibe Coding on Linux

Windsurf takes things further with an agentic approach: it can autonomously run scripts, check outputs, and continue building based on the results until it fulfills your request.

Though it’s relatively new, Windsurf shows massive promise with smooth performance and smart automation packed into a familiar development interface.

Built on (you guessed it) VS Code, Windsurf is crafted by Codeium and introduces features like Supercomplete and Cascade, focusing on deep workspace understanding and intelligent, real-time code generation.

Key Features

✅ SuperComplete for context-aware, full-block code suggestions across your entire project
✅ Real-time chat assistant for debugging, refactoring, and coding help across languages
✅ Command Palette with custom commands.
✅ Cascade feature for syncing project context and iterative problem-solving
✅ Flow tech for automatic workspace updates and intelligent context awareness
✅ Supports top-tier models like GPT-4o, Claude 3.5 Sonnet, LLaMA 3.1 70B & 405B


Final thoughts

I’ve personally used GitHub Copilot’s free tier quite a bit, and recently gave Zed AI a spin and I totally get why the internet is buzzing with excitement.

There’s something oddly satisfying about typing a few lines of instruction and then just... letting your editor take over while you lean back.

That said, I’ve also spent hours untangling some hilariously off-mark Copilot-generated bugs. So yeah, it’s powerful, but far from perfect.

If you’re just stepping into the AI coding world, don’t dive in blind. Take time to learn the basics, experiment with different editors and assistants, and figure out which one actually helps you ship code your way.

And if you're already using an AI editor you swear by, let us know in the comments. Always curious to hear what other devs are using.

by: Abhishek Prakash
Thu, 03 Apr 2025 04:28:54 GMT


FOSS Weekly #25.14: Fedora 42 COSMIC, OnePackage, AppImage Tools and More Linux Stuff

Linux distributions agreeing to a single universal packaging system? That sounds like a joke, right? That's because it is.

It's been a tradition of sorts to prank readers on the 1st of April with a humorous article. Since we are already past the 1st of April in all time zones, let me share this year's April Fools' article with you. I hope you find it as amusing as I did while writing it 😄

No Snap or FlatPak! Linux Distros Agreed to Have Only One Universal Packaging
Is this the end of fragmentation for Linux?
FOSS Weekly #25.14: Fedora 42 COSMIC, OnePackage, AppImage Tools and More Linux Stuff

💬 Let's see what else you get in this edition

  • Vivaldi offering free built-in VPN.
  • Tools to enhance AppImage experience.
  • Serpent OS going through a rebranding.
  • And other Linux news, tips, and, of course, memes!
  • This edition of FOSS Weekly is supported by Typesense.

❇️ Typesense: Open Source Search Engine

Typesense is the free, open-source search engine for forward-looking devs. Make it easy on people: Tpyos? Typesense knows we mean typos, and they happen. With ML-powered typo tolerance and semantic search, Typesense helps your customers find what they’re looking for—fast.

Check them out on GitHub.

GitHub - typesense/typesense: Open Source alternative to Algolia + Pinecone and an Easier-to-Use alternative to ElasticSearch ⚡ 🔍 ✨ Fast, typo tolerant, in-memory fuzzy Search Engine for building delightful search experiences
Open Source alternative to Algolia + Pinecone and an Easier-to-Use alternative to ElasticSearch ⚡ 🔍 ✨ Fast, typo tolerant, in-memory fuzzy Search Engine for building delightful search experiences -…
FOSS Weekly #25.14: Fedora 42 COSMIC, OnePackage, AppImage Tools and More Linux Stuff

📰 Linux and Open Source News

🧠 What We’re Thinking About

Thank goodness Linux saves us from this 🤷

New Windows 11 build makes mandatory Microsoft Account sign-in even more mandatory
“Bypassnro” is an easy MS Account workaround for Home and Pro Windows editions.
FOSS Weekly #25.14: Fedora 42 COSMIC, OnePackage, AppImage Tools and More Linux Stuff

🧮 Linux Tips, Tutorials and More

Love AppImage? These tools will help you improve your AppImage experience.

5 Tools to Enhance Your AppImage Experience on Linux
Love using AppImages but hate the mess? Check out these handy tools that make it super easy to organize, update, and manage AppImages on your Linux system.
FOSS Weekly #25.14: Fedora 42 COSMIC, OnePackage, AppImage Tools and More Linux Stuff

👷 Homelab and Maker's Corner

Don't lose knowledge! Self-host your own Wikipedia or Arch Wiki:

Taking Knowledge in My Own Hands By Self Hosting Wikipedia and Arch Wiki
Doomsday or not, knowledge should be preserved.
FOSS Weekly #25.14: Fedora 42 COSMIC, OnePackage, AppImage Tools and More Linux Stuff

✨ Apps Highlight

Find yourself often forgetting things? Then you might need a reminder app like Tasks.org.

Ditch Proprietary Reminder Apps, Try Tasks.org Instead
Stay organized with Tasks.org, an open source to-do and reminders app that doesn’t sell your data.
FOSS Weekly #25.14: Fedora 42 COSMIC, OnePackage, AppImage Tools and More Linux Stuff

📽️ Videos I am Creating for You

I tested COSMIC alpha on Fedora 42 beta in the latest video. And I have taken some of the feedback to improve the audio quality in this one.

🧩 Quiz Time

Can you solve this riddle?

Riddler’s Back: Open-Source App Quiz
Guess the open-source applications following the riddles.
FOSS Weekly #25.14: Fedora 42 COSMIC, OnePackage, AppImage Tools and More Linux Stuff

After you are done with that, you can try your hand at matching Linux apps with their roles.

💡 Quick Handy Tip

In KDE Plasma, you can edit copied texts in the Clipboard. First, launch the clipboard using the shortcut CTRL+V. Now, click on the Edit button, which looks like a pencil.

FOSS Weekly #25.14: Fedora 42 COSMIC, OnePackage, AppImage Tools and More Linux Stuff

Then, edit the contents and click on Save to store it as a new clipboard item.

FOSS Weekly #25.14: Fedora 42 COSMIC, OnePackage, AppImage Tools and More Linux Stuff

🤣 Meme of the Week

Such a nice vanity plate. 😮

FOSS Weekly #25.14: Fedora 42 COSMIC, OnePackage, AppImage Tools and More Linux Stuff

🗓️ Tech Trivia

On March 31, 1939, Harvard and IBM signed an agreement to build the Mark I, also known as the IBM Automatic Sequence Controlled Calculator (ASCC).

This pioneering electromechanical computer, conceived by Howard Aiken, interpreted instructions from paper tape and data from punch cards, playing a significant role in World War II calculations.

🧑‍🤝‍🧑 FOSSverse Corner

FOSSers are discussing which is the most underrated Linux distribution out there. Care to share your views?

What is the most underrated Linux distribution?
There are some distros like Debian, Ubuntu and Mint that are commonly used and everyone knows how good they are. but There are others that are used only by a few people and perform equally as well. Would you like to nominate your choice for the most underrated Linux distro? I will nominate Void Linux… it is No 93 on distrowatch and performs for me as well as MX Linux or Debian.
FOSS Weekly #25.14: Fedora 42 COSMIC, OnePackage, AppImage Tools and More Linux Stuff

❤️ With love

Share it with your Linux-using friends and encourage them to subscribe (hint: it's here).

Share the articles in Linux Subreddits and community forums.

Follow us on Google News and stay updated in your News feed.

Opt for It's FOSS Plus membership and support us 🙏

Enjoy FOSS 😄

by: Sreenath
Wed, 02 Apr 2025 10:50:07 GMT


5 Tools to Enhance Your AppImage Experience on Linux

The portable AppImage format is quite popular among developers and users alike. It allows you to run applications without installation or dependency issues, on virtually any Linux distribution.

However, managing multiple AppImages or keeping them updated can sometimes be a bit cumbersome. Fortunately, there are third-party tools that simplify the process, making it easier to organize, update, and integrate AppImages into your Linux system.

In this article, I’ll share some useful tools that can help you manage AppImages more effectively and enhance your overall experience.

Gear Lever

Gear Lever is a modern GTK-based application that lets you manage your local AppImage files. It primarily helps you organize AppImages by adding desktop entries, updating applications, and more.

5 Tools to Enhance Your AppImage Experience on Linux
Installed AppImages in Gear Lever

Features of Gear Lever

  • Drag and drop files directly from your file manager
  • Update apps in place
  • Keep multiple versions installed

Install Gear Lever

Gear Lever is available as a Flatpak package. You can install it with the following command:

flatpak install flathub it.mijorus.gearlever

AppImage Launcher

📋
While the last release of AppImage Launcher was a few years ago, it still works fine.

If you're a frequent user of AppImage packages, you should definitely check out AppImage Launcher. This open-source tool helps integrate AppImages into your system.

It allows users to quickly add AppImages to the application menu, manage updates, and remove them with just a few clicks.

5 Tools to Enhance Your AppImage Experience on Linux
AppImage Launcher

Features of AppImage Launcher

  • Adds desktop integration to AppImage files
  • Includes a helper tool to manage AppImage updates
  • Allows easy removal of AppImages
  • Provides CLI tools for terminal-based operations and automation

Install AppImage Launcher

For Ubuntu users, the .deb file is available under the Continuous build section on the releases page.

AppImage Package Manager and AppMan

AppImage Package Manager (AM) is designed to simplify AppImage management, functioning similarly to how APT or DNF handle native packages. It supports not just AppImages, but other portable formats as well.

AM relies on a large database of shell scripts, inspired by the Arch User Repository (AUR), to manage AppImages from various sources.

A similar tool is AppMan. It is basically AM but manages all your apps locally without needing root access.

If you are a casual user, you can use AppMan instead of AM so that everything stays local and no sudo privileges are needed.

AppImage Package Manager (AppMan Version)

Features of AppImage Package Manager

  • Supports AppImages and standalone archives (e.g., Firefox, Blender)
  • Includes a comprehensive shell script database for official and community-sourced AppImages
  • Create and restore snapshots
  • Drag-and-drop AppImage integration
  • Convert legacy AppImage formats

Install AppImage Package Manager

To install, run the following commands:

wget -q https://raw.githubusercontent.com/ivan-hc/AM/main/AM-INSTALLER && chmod a+x ./AM-INSTALLER && ./AM-INSTALLER

The installer will prompt you to choose between AM and AppMan. Choose AppMan if you prefer local, privilege-free management.

AppImagePool

AppImagePool is a Flutter-based client for AppImage Hub. It offers a clean interface to browse and download AppImages listed on AppImage Hub.

5 Tools to Enhance Your AppImage Experience on Linux
AppImage Pool client home page

Features of AppImagePool

  • Categorized list of AppImages
  • Download from GitHub directly, no extra server involved
  • Integrate and Disintegrate AppImages easily from your system
  • Version History and multi download support

Installing AppImage Pool

Download the AppImage file from the official GitHub releases page.

There is also a Flatpak package available to install from Flathub. If your system has Flatpak support, use the command:

flatpak install flathub io.github.prateekmedia.appimagepool

Zap

📋
The last release of Zap was a few years ago but it worked fine in my testing.

Zap is an AppImage package manager written in Go. It allows you to install, update, and integrate AppImage packages efficiently.


Zap AppImage package Manager

Features of Zap

  • Install packages from the AppImage catalog using registered names
  • Select and install specific versions
  • Use the Zap daemon for automatic update checks
  • Install AppImages from GitHub releases

Install Zap

To install Zap locally, run:

curl https://raw.githubusercontent.com/srevinsaju/zap/main/install.sh | bash -s

For a system-wide installation, run:

curl https://raw.githubusercontent.com/srevinsaju/zap/main/install.sh | sudo bash -s

In the end...

Here are a few more resources that an AppImage lover might like:

  • Bauh package manager: bauh is a graphical interface for managing various Linux package formats like AppImage, Deb, Flatpak, etc.
  • XApp-Thumbnailers: This is a thumbnail generation tool for popular file managers.
  • Awesome AppImage: Lists several AppImage tools and resources.

AppImage is a fantastic way to use portable applications on Linux, but managing them manually can be tedious over time. Thankfully, the tools mentioned above make it easier to organize, update, and integrate AppImages into your workflow.

From feature-rich GUI tools like Gear Lever and AppImagePool to CLI tools like AM, AppMan, and Zap, there's something here for every kind of user. Try out a few and see which one fits your style best.

by: Abhishek Prakash
Thu, 27 Mar 2025 04:38:19 GMT


FOSS Weekly #25.13: Kernel 6.14, Zorin 17.3, EU OS, apt Guide and More Linux Stuff

Rust in Linux kernel is not news. You already know about that. But Rust in GNU is a big move.

It seems that a Rust rewrite of GNU's coreutils (the meta package that gives us commands like cp, ls, dd, mv, etc.) will be included in Ubuntu's upcoming release.

This concerns many hardcore Free Software supporters, as they see it as a move to take GNU out of GNU/Linux.

What are your thoughts on it?

💬 Let's see what else you get in this edition

  • Chimera Linux moving away from RISC-V.
  • Beginner's guide to apt command.
  • A new community Linux distro being proposed for the EU.
  • Linux kernel 6.14 releasing with many refinements.
  • And other Linux news, tips, and, of course, memes!
  • This edition of FOSS Weekly is supported by PikaPods.

❇️ PikaPods: Enjoy Self-hosting Hassle-free

PikaPods allows you to quickly deploy your favorite open source software. All future updates are handled automatically by PikaPods while you enjoy using the software. PikaPods also shares revenue with the original developers of the software.

You get a $5 free credit to try it out and see if you can rely on PikaPods. I know, you can 😄

PikaPods - Instant Open Source App Hosting
Run the finest Open Source web apps from $1.20/month, fully managed, no tracking, no ads, full privacy. Self-hosting was never this convenient.
FOSS Weekly #25.13: Kernel 6.14, Zorin 17.3, EU OS, apt Guide and More Linux Stuff

📰 Linux and Open Source News

Linux kernel 6.14 has arrived with performance gains and new support:

Linux Kernel 6.14 Arrives With Performance Gains for AMD, Intel, and RISC-V
The second major Linux kernel release of 2025 has arrived!
FOSS Weekly #25.13: Kernel 6.14, Zorin 17.3, EU OS, apt Guide and More Linux Stuff

🧠 What We’re Thinking About

A new community-led initiative called “EU OS” to develop a Linux distribution for the European Union looks like a positive development.

Can this become the European Union’s own Linux Distribution?
Can this Linux-powered operating system disrupt Windows’ hold in the European Union?
FOSS Weekly #25.13: Kernel 6.14, Zorin 17.3, EU OS, apt Guide and More Linux Stuff

🧮 Linux Tips, Tutorials and More

Using apt Commands in Linux [Ultimate Guide]
This guide shows you how to use apt commands in Linux with examples so that you can manage packages effectively.
FOSS Weekly #25.13: Kernel 6.14, Zorin 17.3, EU OS, apt Guide and More Linux Stuff

👷 Homelab and Maker's Corner

Run Ollama on Docker and take your AI workflow anywhere.

Setting Up Ollama With Docker [With NVIDIA GPU]
Learn to run Ollama in Docker container in this tutorial. Yes, Nvidia GPU can also be used in this setup.
FOSS Weekly #25.13: Kernel 6.14, Zorin 17.3, EU OS, apt Guide and More Linux Stuff

✨ Apps Highlight

Do you want a tool that helps with the management of your Linux system?

Linux-Assistant is a Tool You Didn’t Know You Needed!
Tired of managing your Linux installation? Linux-Assistant helps simplify common maintenance tasks, making system management easier.
FOSS Weekly #25.13: Kernel 6.14, Zorin 17.3, EU OS, apt Guide and More Linux Stuff

📽️ Videos I am Creating for You

Learn about modern alternatives to the classic Linux commands in the latest video.

🧩 Quiz Time

Do you know all of these legendary coders?

Guess the Legendary Coders
A simple quiz that challenges to identify the creator of the famous programming languages.
FOSS Weekly #25.13: Kernel 6.14, Zorin 17.3, EU OS, apt Guide and More Linux Stuff

Also, a new crossword on discontinued Linux distros.

💡 Quick Handy Tip

In GNOME, you can use the Auto Move Windows extension to automatically open new app windows in specific workspaces. First, install it either from the webpage, or via Extension Manager.

FOSS Weekly #25.13: Kernel 6.14, Zorin 17.3, EU OS, apt Guide and More Linux Stuff

In the extension settings page, select the windows and the corresponding workspace to automatically move new windows into workspaces. Now, new windows should appear in their designated workspaces.

FOSS Weekly #25.13: Kernel 6.14, Zorin 17.3, EU OS, apt Guide and More Linux Stuff

🤣 Meme of the Week

This is heartbreaking 💔

FOSS Weekly #25.13: Kernel 6.14, Zorin 17.3, EU OS, apt Guide and More Linux Stuff

🗓️ Tech Trivia

On March 24, 1896, Russian physicist Aleksandr Popov successfully transmitted radio signals over 250 meters between buildings at St. Petersburg University. This achievement followed his 1895 presentation of a wireless lightning detector.

🧑‍🤝‍🧑 FOSSverse Corner

Regular FOSSer Paul is pondering a switch to a 64-bit system on a local priest's computer. Can you help?

Updating Chromium 32-bit version or should I switch to 64-bit system with newer browser?
Strange request… I do some work for the local priest, some 6 years back I gave him a tower computer with linux mint mate running 32 bits, mainly as it was an old stock machine and not capable of better. 2 years later it died so I replaced it but just transfered the hard disk from the old machine to his newer computer. Why,? he had all his files, images etc on. But mainly his emails which he could not remember passwords for, same with his sites he uses for research. Easy option for me than res…
FOSS Weekly #25.13: Kernel 6.14, Zorin 17.3, EU OS, apt Guide and More Linux Stuff

❤️ With love

Share it with your Linux-using friends and encourage them to subscribe (hint: it's here).

Share the articles in Linux Subreddits and community forums.

Follow us on Google News and stay updated in your News feed.

Opt for It's FOSS Plus membership and support us 🙏

Enjoy FOSS 😄

by: Abhishek Kumar
Tue, 25 Mar 2025 13:41:16 GMT


Setting Up Ollama With Docker

Ollama has been a game-changer for running large language models (LLMs) locally, and I've covered quite a few tutorials on setting it up on different devices, including my Raspberry Pi.

But as I kept experimenting, I realized there was still another fantastic way to run Ollama: inside a Docker container.

Now, this isn’t exactly breaking news. The first Ollama Docker image was released back in 2023. But until recently, I always used it with a native install.

It wasn’t until I was working on an Immich tutorial that I stumbled upon NVIDIA Container Toolkit, which allows you to add GPU support to Docker containers.

That was when I got hooked on the idea of setting up Ollama inside Docker and leveraging GPU acceleration.

In this guide, I’ll walk you through two ways to run Ollama in Docker with GPU support:

  1. Using a one-liner docker run command.
  2. With Docker Compose.

Now, let’s dive in.

📋
Before we get started, if you haven’t installed Docker yet, check out our previous tutorials on setting up Docker on Linux.

Prerequisite: Installing Nvidia Container toolkit

The NVIDIA Container Toolkit includes the NVIDIA Container Runtime and the NVIDIA Container Toolkit plugin for Docker, which enable GPU support inside Docker containers.

Before installation, make sure that you have already installed the GPU drivers on your specific distro.

Now, to install the NVIDIA Container Toolkit, follow these steps:

  1. Enable the NVIDIA CUDA repository on your system by running the following commands in a terminal window:
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt update
Setting Up Ollama With Docker
If your Nvidia GPU driver is not properly installed, you might encounter some problems when installing nvidia-container-toolkit on your system just like in my case on Debian 12.
  2. Install the NVIDIA Container Toolkit by running the following command in a terminal window:
sudo apt install -y nvidia-container-toolkit
Setting Up Ollama With Docker
  3. Restart the Docker service to apply the changes:
sudo systemctl restart docker

Method 1: Running Ollama with Docker run (Quick Method)

If you just want to spin up Ollama in a container without much hassle, this one-liner will do the trick:

docker run -d --name ollama -p 11434:11434 -v ollama:/root/.ollama ollama/ollama

Or, if you want the GPU support:

docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

Here's a breakdown of what's going on with this command:

  • docker run -d: Runs the container in detached mode.
  • --name ollama: Names the container "ollama."
  • -p 11434:11434: Maps port 11434 from the container to the host.
  • -v ollama:/root/.ollama: Creates a persistent volume for storing models.
  • ollama/ollama: Uses the official Ollama Docker image.
Setting Up Ollama With Docker

Once the container is running, you can check its status with:

docker ps
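
If you went with the GPU-enabled variant, it is also worth confirming that the GPU is actually visible inside the container. This is a quick sanity check that assumes the NVIDIA Container Toolkit injects the driver utilities (such as nvidia-smi) into the container, which it normally does:

# should list your GPU if the passthrough is working
docker exec -it ollama nvidia-smi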

Method 2: Running Ollama with Docker compose

I personally find that docker compose is a more structured approach when setting up a service inside a container, as it's much easier to manage.

💡
If you're setting up Ollama with Open WebUI, I would suggest using Docker volumes instead of bind mounts for a less frustrating experience.

We'll start with creating a docker-compose.yml file, to manage the Ollama container:

version: '3.8'

services:
  ollama:
    image: ollama/ollama
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    restart: unless-stopped

volumes:
  ollama:
Setting Up Ollama With Docker

With the docker-compose.yml file in place, start the container using:

docker-compose up -d
Setting Up Ollama With Docker

This will spin up Ollama with GPU acceleration enabled.
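
To confirm that the container started cleanly, you can follow its logs with the standard Docker logging command; Ollama typically reports that it is listening on port 11434 once it is ready:

# stream the container logs, press Ctrl+C to stop following
docker logs -f ollama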

Accessing Ollama in Docker

Now that we have Ollama running inside a Docker container, how do we interact with it efficiently?

There are two main ways:

1. Using the Docker shell

This is really easy; you can access the Ollama container shell by typing:

docker exec -it ollama <commands>
Setting Up Ollama With Docker

But typing this same command every time can get tiring. We can create an alias to make it shorter.

Add this to your .bashrc file:

echo 'alias ollama="docker exec -it ollama ollama"' >> $HOME/.bashrc
source $HOME/.bashrc

and since I'm using the zsh shell, I'll be using these commands instead:

echo 'alias ollama="docker exec -it ollama ollama"' >> $HOME/.zshrc
source $HOME/.zshrc

Now, instead of typing the full docker exec command, you can just run:

ollama ps
ollama pull llama3
ollama run llama3
Setting Up Ollama With Docker

This makes interacting with Ollama inside Docker feel just like using a native install.

2. Using Ollama’s API with Web UI Clients

Ollama exposes an API on http://localhost:11434, allowing other tools to connect and interact with it.
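
You can test the API directly from the host with curl. The snippet below uses Ollama's generate endpoint; the model name is just an assumption here, so replace it with a model you have already pulled:

# send a one-off prompt to the API and get the full response in one JSON object
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'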

If you prefer a graphical user interface (GUI) instead of the command line, you can use several Web UI clients.

Some popular tools that work with Ollama include:

  • Open WebUI – A simple and beautiful frontend for local LLMs.
  • LibreChat – A powerful ChatGPT-like interface supporting multiple backends.

We’ve actually covered 12 different tools that provide a Web UI for Ollama.

Whether you want something lightweight or a full-featured alternative to ChatGPT, there’s a UI that fits your needs.

Conclusion

Running Ollama in Docker provides a flexible and efficient way to interact with local AI models, especially when combined with a UI for easy access over a network.

I’m still tweaking my setup to ensure smooth performance across multiple devices, but so far, it’s working well.

On another note, diving deeper into NVIDIA Container Toolkit has sparked some interesting ideas. The ability to pass GPU acceleration to Docker containers opens up possibilities beyond just Ollama.

I’m considering testing it with Jellyfin for hardware-accelerated transcoding, which would be a huge boost for my media server setup.

Other projects, like Stable Diffusion or AI-powered upscaling, could also benefit from proper GPU passthrough.

That said, I’d love to hear about your setup! Are you running Ollama in Docker, or do you prefer a native install? Have you tried any Web UI clients, or are you sticking with the command line?

Drop your thoughts in the comments below.

by: Abhishek Prakash
Mon, 24 Mar 2025 07:26:39 GMT


Understanding pacman -Syu Command in Arch Linux

How do you update Arch Linux? You run sudo pacman -Syu command.

How do you install a package on Arch Linux? You run sudo pacman -Syu package_name.

Which might make you wonder why do you need a system update while installing a new package? What does those S, y and u do? Let me explain these things to you.

What does pacman -Syu do?

In simpler words, pacman -Syu updates all the installed packages on your Arch-based Linux distribution if they have a newer version available. Here, -S stands for sync (or install), y refreshes the local package database cache from the remote repository, and u builds a list of all the installed packages that can be updated by referring to the local package database cache, then gets the actual packages from the remote repository.

Understanding pacman -Syu command

I hope you are familiar with the concept of package manager. If not, please refer to this explainer article:

What is a Package Manager in Linux?
Learn about packaging system and package managers in Linux. You’ll learn how do they work and what kind of package managers available.
Understanding pacman -Syu Command in Arch Linux

The pacman package manager works pretty much the same way. There is a remote repository that holds the actual packages, and a local package database that keeps information about those packages by interacting with the remote repository. pacman is the command line interface that uses this structure to manage packages on your Arch Linux system.

Understanding pacman -Syu Command in Arch Linux

-S (capital letter S) is the main option and y and u are 'sub-options' supporting it.

S stands for sync but you can think of it as 'install'. It syncs your Arch Linux system with the remote repository for the given package. Meaning, both repository and local Arch system will be synced (at that time) for the given package. Which is another way of saying that the package is installed on the system.

You cannot just run pacman -S and expect it to sync (install) all the packages from the repositories on the local system. That would be disastrous, as your system would try to install all 40,000+ packages from the remote repositories.

This is why you need to provide a target (package names) when using the -S option on its own. Otherwise, you'll see this error.

sudo pacman -S
error: no targets specified (use -h for help)

If you specify a package or group name, it will 'install' the package on your system.

There are additional options with Sync. You'll probably be using a lot of sudo pacman -Syu.

Those y and u are 'sub options' of -S. You cannot use them on their own like pacman -yu:

sudo pacman -yu
error: invalid option '-y'

While the order of S, y and u doesn't matter, there has to be an S with y and u.

The y sub-option of S refreshes the local package cache DB from the remote repository. The u sub-option is for sysupgrade; it refers to the local package cache to build a list of all the installed packages that can be upgraded to a newer version.

With the work of these two sub-options done, S (sync) fetches the newer versions of those packages from the remote repository and installs them, updating the existing ones.
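
If you are curious which packages that u step would actually touch, pacman can list the pending upgrades for you. This is just a read-only check against the local database; it doesn't change anything on the system:

# list installed packages that have a newer version in the sync database
pacman -Qu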

📋
Sometimes, I feel like it would have been better to use terms like install instead of sync and r for refresh instead of y. Easier to understand.

Why do some tutorials mention "pacman -Syu" even while installing a single package?

You'll notice that many tutorials on the web often mention the pacman command for package installation in the following format:

sudo pacman -Syu package_name

And you may wonder what's the point of updating all the installed packages.

Sure, you can use sudo pacman -S package_name for installing packages, and it will run fine if you keep your Arch system updated frequently.

But if you haven't run a system update for a while, installation may throw a 404 missing file error. You need to update the local package database first.

Now, you may think: why not just do sudo pacman -Sy package_name, which would be quicker as it refreshes the package database and installs only the package you want, without upgrading other packages that have newer versions available?

There is a pretty good reason for that. It helps avoid the dependency issues that could occur otherwise.

I liked the analogy in this Reddit discussion and I am going to use the same here as well.

Imagine an old-fashioned paper catalog folks used to get in the mail a few decades back. If you get a catalog in the mail from a store, it had a listing of everything the store had for sale and the current prices. The Arch package database is like this catalog. The catalog you have with you is the package database cache on your system.

The packages are like the actual goods you buy through the catalog. You find the item number that you want in the catalog, place the order, and the correct item is delivered.

Imagine you just run pacman -Sy. This is equivalent to getting the latest catalog.

Now, let's say you have an iPhone 14 (an outdated package) and you order an iPhone charger from the new catalog. You'll have a problem when the new charger arrives, because the latest iPhones in the catalog use a USB-C port while your older phone still has a Lightning port. A conflict arises.

If you had run pacman -Syu, you would have ordered both the newer iPhone and the correct charger with it.

(Don't take it literally and start commenting that it will be a financially stupid decision to order a new phone instead of the older charger. This is just for example 😜)

Conclusion

I don't know whether you were ever curious about it or not, but I do hope you have a slightly better understanding of the logic behind the famous -Syu option of pacman command. The man page is always there to read the official explanation of each option and its usage.

You can always explore more options of the pacman command to see what it can do for regular package management on Arch Linux.

Using pacman Commands in Arch Linux [Beginner’s Guide]
Learn what you can do with pacman commands in Linux, how to use them to find new packages, install and upgrade new packages, and clean your system.
Understanding pacman -Syu Command in Arch Linux

🗨️ Did this article help you understand the 'sync' concept in Arch Linux, or are you more confused than before? Do let me know in the comment section.

by: Abhishek Prakash
Thu, 20 Mar 2025 05:18:33 GMT


FOSS Weekly #25.12: GNOME 48 and GIMP 3.0 Released, Switching to IceWM, Ollama Commands and More Linux Stuff

We reached the 30,000 followers mark on Mastodon. This is an unexpected feat.

We have 140,000 people on our Twitter profile but that's because Twitter/X is a bigger platform. I am pleasantly surprised to see so many people on an alternative, decentralized platform like Mastodon.

If you use Mastodon, do join us there.

It's FOSS (@itsfoss@mastodon.social)
6.2K Posts, 27 Following, 30.2K Followers · World’s leading Linux and Open Source web portal. https://itsfoss.com/
FOSS Weekly #25.12: GNOME 48 and GIMP 3.0 Released, Switching to IceWM, Ollama Commands and More Linux Stuff

💬 Let's see what else you get in this edition

  • AntiX and IceWM reviving an old computer.
  • Roblox introducing a new open source AI model.
  • A new GIMP release arriving after a decade of development.
  • And other Linux news, tips, and, of course, memes!
  • This edition of FOSS Weekly is supported by ANY.RUN.

🤖 ANY.RUN’s Instant Android Threat Analysis Is Live – Now Available to Everyone

ANY.RUN’s Interactive Sandbox now supports Android OS, making mobile malware detection faster, smarter, and more effective in a secure, real-time environment.

Now your team can analyze Android malware behavior just like on a real device: interact with possible threats and speed up response times.

Be among the first to try this game-changing upgrade and help your team:

  • Expand threat visibility with real-time APK analysis
  • Reduce incident response times
  • Simplify threat hunting
  • Lower cybersecurity costs

…all from one convenient, cloud-based environment ☁️

Available for ALL plans, including Free. Start your first analysis now!

Interactive Online Malware Analysis Sandbox - ANY.RUN
Cloud-based malware analysis service. Take your information security to the next level. Analyze suspicious and malicious activities using our innovative tools.
FOSS Weekly #25.12: GNOME 48 and GIMP 3.0 Released, Switching to IceWM, Ollama Commands and More Linux Stuff

📰 Linux and Open Source News

GNOME 48 is here to bring a modern desktop experience to Linux.

GNOME 48 Released With Focus on Your Digital Wellbeing
It took its time, but GNOME 48 is finally here with some rather interesting changes.
FOSS Weekly #25.12: GNOME 48 and GIMP 3.0 Released, Switching to IceWM, Ollama Commands and More Linux Stuff

After making us wait for 10 years, the GIMP 3.0 release has finally shown up with loads of improvements:

After a Decade of Waiting, GIMP 3.0.0 is Finally Here!
At last, GIMP 3.0 has arrived.
FOSS Weekly #25.12: GNOME 48 and GIMP 3.0 Released, Switching to IceWM, Ollama Commands and More Linux Stuff

🧠 What We’re Thinking About

One of our community contributors switched from Xfce on EndeavourOS to IceWM on AntiX. They shared how it went.

Switching From Xfce to IceWM With AntiX, My Old Computer is Back in Action Again
How I switched from Xfce on EndeavourOS to IceWM on antiX and customized it to fit my vibe.
FOSS Weekly #25.12: GNOME 48 and GIMP 3.0 Released, Switching to IceWM, Ollama Commands and More Linux Stuff

🧮 Linux Tips, Tutorials and More

👷 Homelab and Maker's Corner

Manage LLMs locally and easily by using Ollama commands.

Must Know Ollama Commands for Managing LLMs locally
Here are the ollama commands you need to know for managing your large language models effectively.
FOSS Weekly #25.12: GNOME 48 and GIMP 3.0 Released, Switching to IceWM, Ollama Commands and More Linux Stuff

✨ Apps Highlight

Keep track of the data usage on your Android smartphone with Data Monitor.

Data Monitor: The Sleek Open-Source Android App to Track Data Usage
How much data do you use on a daily/monthly basis? Data Monitor helps you track that.
FOSS Weekly #25.12: GNOME 48 and GIMP 3.0 Released, Switching to IceWM, Ollama Commands and More Linux Stuff

📽️ Videos I am Creating for You

I share how I dual booted CachyOS with Windows in this video.

🧩 Quiz Time

This fun crossword is for the fans of Debian... and/or Toy Story. And another one on open source licenses.

Open-Source Licenses: Quiz
You must learn about the open-source licenses. And, this quiz helps you do that.
FOSS Weekly #25.12: GNOME 48 and GIMP 3.0 Released, Switching to IceWM, Ollama Commands and More Linux Stuff

💡 Quick Handy Tip

In GNOME, you can add custom directories to GNOME Search. First, open Settings and go to Search → Search Locations. Here, click on Add Locations and choose a location.

FOSS Weekly #25.12: GNOME 48 and GIMP 3.0 Released, Switching to IceWM, Ollama Commands and More Linux Stuff

Now, add the locations you want to see as results in the overview. After that, whenever you search, these locations will appear in the results page if there's a match. The quick demo above just shows how to do it; on your own computer, avoid adding system locations like /etc, /usr, etc.

FOSS Weekly #25.12: GNOME 48 and GIMP 3.0 Released, Switching to IceWM, Ollama Commands and More Linux Stuff

🤣 Meme of the Week

The list is virtually non-existent at this point. 🙂

FOSS Weekly #25.12: GNOME 48 and GIMP 3.0 Released, Switching to IceWM, Ollama Commands and More Linux Stuff

🗓️ Tech Trivia

On March 17, 1988, Apple sued Microsoft, claiming that Windows 2.0 copied the Macintosh GUI. Initially, a judge ruled that Microsoft had limited rights based on an earlier licensing agreement. The case went through appeals and eventually reached the U.S. Supreme Court, which declined to review it in 1995.

This decision effectively ended the legal battle, allowing Microsoft to continue using the Windows GUI.

🧑‍🤝‍🧑 FOSSverse Corner

An interesting read on the move by Ubuntu towards Rust.

Modernizing Ubuntu with Rust-based Tooling
Interesting article. There is a YouTube video talking about it too. It talks about rewriting GNU Coreutils in Rust.
FOSS Weekly #25.12: GNOME 48 and GIMP 3.0 Released, Switching to IceWM, Ollama Commands and More Linux Stuff

❤️ With love

Share it with your Linux-using friends and encourage them to subscribe (hint: it's here).

Share the articles in Linux Subreddits and community forums.

Follow us on Google News and stay updated in your News feed.

Opt for It's FOSS Plus membership and support us 🙏

Enjoy FOSS 😄

by: Abhishek Prakash
Wed, 19 Mar 2025 12:17:19 GMT


Set an AppImage Application as Default App

Imagine you found a cool text editor like Pulsar and downloaded it in the AppImage format. You enjoy using it and now want to make it the default application for markdown files.

You right-click on the file and click the 'Open With' option, but you don't see Pulsar listed here.

That's a problem, right? But it can be easily fixed by creating a desktop entry for that AppImage application.

Let me show you how to do that.

Step 1: Create a desktop entry for AppImage

The very first step is to create a desktop file for the AppImage application. Here, we will use the Gear Lever app to create the desktop entry.

Gear Lever is available as a Flatpak package from FlatHub. I know. Another package format, but that's how it is.

Anyway, if you have Flatpak support enabled, install Gear Lever with this command:

flatpak install flathub it.mijorus.gearlever

Now, right-click on the AppImage file you downloaded and select Open With Gear Lever.

Set an AppImage Application as Default App
Open AppImage in Gear Lever

Click on the Unlock button in Gear Lever.

Set an AppImage Application as Default App
Click on Unlock

Now click on the "Move to app menu" button.

Set an AppImage Application as Default App
Click on the "Move to the app menu" button

Verify everything is ok by searching for the app in the system menu.

Set an AppImage Application as Default App
Verify the app integration
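
By the way, if you prefer not to use a GUI tool, a desktop entry is just a small text file you can create yourself in ~/.local/share/applications. Here is a minimal sketch; the file name, the AppImage path and the MIME type are placeholders you would adjust for your own setup:

[Desktop Entry]
Type=Application
Name=Pulsar
Exec=/home/user/Applications/Pulsar.AppImage %F
Icon=pulsar
MimeType=text/markdown;
Categories=Utility;TextEditor;

Save it as something like pulsar.desktop, and the application should show up in the menu and in the 'Open With' dialog.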

Great! So we have the application integrated in the desktop. Let's move to the second step.

Step 2: Setting default app through file manager

Let's say you want to open all your .txt text files in the Pulsar editor.

The easiest way to achieve this is through the file manager.

Open the file manager and right-click on the file of your choice. Now select the Open With option.

Set an AppImage Application as Default App
Select the "Open With" option

In the next window, you can start typing the name of the application to begin a search. It will also show you the AppImage program you integrated with the desktop previously.

Set an AppImage Application as Default App
Search for an App

Once you spot the app, click on it to select and then enable the "Always use for this file type" toggle button. Then click Open as shown in the screenshot below.

Set an AppImage Application as Default App
Set a default app

That's it. From now on, your file will be opened in the AppImage of your choice. To verify this, you can right-click on the file. The first entry on the context menu will be the name of your AppImage application. In this case, Pulsar.

Set an AppImage Application as Default App
First item in the context menu
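
If you prefer the terminal, the same association can usually be set with xdg-mime. This is just a sketch; the desktop file name (pulsar.desktop here) is an assumption, so check the actual name in ~/.local/share/applications, and use the MIME type of your file:

xdg-mime default pulsar.desktop text/markdown
xdg-mime query default text/markdown    # verify the association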

Alternative method: Change apps from settings

Let's say you have an AppImage for applications like Web Browser, Music Player, etc. These can be changed from the system settings.

Given you have created the AppImage desktop entry following the first step, open the system settings in Ubuntu.

Go to Apps → Default Apps.

Here, set the apps for categories you want.

Set an AppImage Application as Default App
Set Default Browser

If you click on the drop-down menu corresponding to a category in settings, you can select an app. The AppImage app will also be listed here. In the screenshot above, you can see Vivaldi AppImage is set as the default browser.

For Linux Mint users, you can set it using the Preferred Application settings.

Set an AppImage Application as Default App
Preferred application in Linux Mint

Conclusion

A lot of AppImage 'issues', or should I say shortcomings, can be solved by desktop integration. It surprises me that AppImage doesn't provide an official way of doing these things.

Well, we have the wonderful open source developers that help us by creating helpful utilities like Gear Lever here.

I hope this quick little tip helps you enjoy your AppImages 😄

by: Abhishek Kumar
Mon, 17 Mar 2025 15:44:13 GMT


Must Know Ollama Commands for Managing LLMs locally

Ollama is one of the easiest ways of running large language models (LLMs) locally on your own machine.

It's like Docker. You download publicly available models from Hugging Face using its command line interface. Connect Ollama with a graphical interface and you have a local AI tool that works as a ChatGPT alternative.

In this guide, I'll walk you through some essential Ollama commands, explaining what they do, and share some tricks at the end to enhance your experience.

💡
If you're new to Ollama or just getting started, we've already covered a detailed Ollama installation guide for Linux to help you set it up effortlessly.

Checking available commands

Before we dive into specific commands, let's start with the basics. To see all available Ollama commands, run:

ollama --help

This will list all the possible commands along with a brief description of what they do. If you want details about a specific command, you can use:

ollama <command> --help

For example, ollama run --help will show all available options for running models.

Here's a glimpse of essential Ollama commands, which we’ve covered in more detail further in the article.

Command Description
ollama create Creates a custom model from a Modelfile, allowing you to fine-tune or modify existing models.
ollama run <model> Runs a specified model to process input text, generate responses, or perform various AI tasks.
ollama pull <model> Downloads a model from Ollama’s library to use it locally.
ollama list Displays all installed models on your system.
ollama rm <model> Removes a specific model from your system to free up space.
ollama serve Runs an Ollama model as a local API endpoint, useful for integrating with other applications.
ollama ps Shows currently running Ollama processes, useful for debugging and monitoring active sessions.
ollama stop <model> Stops a running Ollama process using its process ID or name.
ollama show <model> Displays metadata and details about a specific model, including its parameters.
ollama run <model> "with input" Executes a model with specific text input, such as generating content or extracting information.
ollama run <model> < "with file input" Processes a file (text, code, or image) using an AI model to extract insights or perform analysis.

1. Downloading an LLM

If you want to manually download a model from the Ollama library without running it immediately, use:

ollama pull <model_name>

For instance, to download Phi-2 (a 2.7B parameter model):

ollama pull phi:2.7b

This will store the model locally, making it available for offline use.
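
Once the pull finishes, you can verify the model is available and inspect its details with commands also covered in this article:

ollama list
ollama show phi:2.7b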

Must Know Ollama Commands for Managing LLMs locally
📋
There is no way to fetch the list of available models from the command line. You have to visit the Ollama website and get the model names to use with the pull command.

2. Running an LLM

To begin chatting with a model, use:

ollama run <model_name>

For example, to run a small model like Phi2:

ollama run phi:2.7b
Must Know Ollama Commands for Managing LLMs locally

If you don’t have the model downloaded, Ollama will fetch it automatically. Once it's running, you can start chatting with it directly in the terminal.

Some useful tricks while interacting with a running model:

  • Type /set parameter num_ctx 8192 to adjust the context window.
  • Use /show info to display model details.
  • Exit by typing /bye.

3. Listing installed LLMs

If you’ve downloaded multiple models, you might want to see which ones are available locally. You can do this with:

ollama list

This will output something like:

Must Know Ollama Commands for Managing LLMs locally

This command is great for checking which models are installed before running them.

4. Checking running LLMs

If you're running multiple models and want to see which ones are active, use:

ollama ps

You'll see an output like:

Must Know Ollama Commands for Managing LLMs locally

To stop a running model, you can simply exit its session or restart the Ollama server.

5. Starting the ollama server

The ollama serve command starts a local server to manage and run LLMs.

This is necessary if you want to interact with models through an API instead of just using the command line.

ollama serve
Must Know Ollama Commands for Managing LLMs locally

By default, the server runs on http://localhost:11434/, and if you visit this address in your browser, you'll see "Ollama is running."

Must Know Ollama Commands for Managing LLMs locally

You can configure the server with environment variables, such as:

  • OLLAMA_DEBUG=1 → Enables debug mode for troubleshooting.
  • OLLAMA_HOST=0.0.0.0:11434 → Binds the server to a different address/port.
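
For example, here is a minimal sketch of binding the server to all interfaces and then testing it with a call to its generate API endpoint from another terminal (the model name is just an example and must already be pulled):

OLLAMA_HOST=0.0.0.0:11434 ollama serve

curl http://localhost:11434/api/generate -d '{"model": "phi:2.7b", "prompt": "Why is the sky blue?", "stream": false}'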

6. Updating existing LLMs

There is no dedicated ollama command for updating installed LLMs. You can run the pull command periodically to update an installed model:

ollama pull <model_name>

If you want to update all the models, you can combine the commands in this way:

ollama list | tail -n +2 | awk '{print $1}' | xargs -I {} ollama pull {}

That's the magic of the AWK scripting tool and the power of the xargs command.

Here's how the command works (if you don't want to ask your local AI).

ollama list prints all the installed models, and tail -n +2 takes the output from line 2 onwards, since line 1 is a header without model names. The awk command then extracts the first column, which holds the model name. This is passed to xargs, which puts each model name in the {} placeholder, so ollama pull {} runs as ollama pull model_name for every installed model.

7. Custom model configuration

One of the coolest features of Ollama is the ability to create custom model configurations.

For example, let's say you want to turn llama3.2 into a dedicated web development assistant.

First, create a file named Modelfile in your working directory with the following content:

FROM llama3.2:3b
PARAMETER temperature 0.5
PARAMETER top_p 0.9
SYSTEM You are a senior web developer specializing in JavaScript, front-end frameworks (React, Vue), and back-end technologies (Node.js, Express). Provide well-structured, optimized code with clear explanations and best practices.

Now, use Ollama to create a new model from the Modelfile:

ollama create js-web-dev -f Modelfile
Must Know Ollama Commands for Managing LLMs locally

Once the model is created, you can run it interactively:

ollama run js-web-dev "Write a well-optimized JavaScript function to fetch data from an API and handle errors properly."
Must Know Ollama Commands for Managing LLMs locally

If you want to tweak the model further:

  • Adjust temperature for more randomness (0.7) or strict accuracy (0.3).
  • Modify top_p to control diversity (0.8 for stricter responses).
  • Add more specific system instructions, like "Focus on React performance optimization."

Some other tricks to enhance your experience

Ollama isn't just a tool for running language models locally; it can be a powerful AI assistant inside a terminal for a variety of tasks.

Like, I personally use Ollama to extract info from a document, analyze images and even help with coding without leaving the terminal.

💡
Running Ollama for image processing, document analysis, or code generation without a GPU can be excruciatingly slow.

Summarizing documents

Ollama can quickly extract key points from long documents, research papers, and reports, saving you from hours of manual reading.

That said, I personally don’t use it much for PDFs. The results can be janky, especially if the document has complex formatting or scanned text.

If you’re dealing with structured text files, though, it works fairly well.

ollama run phi "Summarize this document in 100 words." < french_revolution.txt

Image analysis

Though Ollama primarily works with text, some vision models (like llava) support multimodal processing, meaning they can analyze and describe images.

This is particularly useful in fields like computer vision, accessibility, and content moderation.

ollama run llava:7b "Describe the content of this image." < cat.jpg

Code generation and assistance

Debugging a complex codebase? Need to understand a piece of unfamiliar code?

Instead of spending hours deciphering it, let Ollama have a look at it. 😉

ollama run phi "Explain this algorithm step-by-step." < algorithm.py

Additional resources

If you want to dive deeper into Ollama or are looking to integrate it into your own projects, I highly recommend checking out freeCodeCamp’s YouTube video on the topic.

It provides a clear, hands-on introduction to working with Ollama and its API.

Conclusion

Ollama makes it possible to harness AI on your own hardware. While it may seem overwhelming at first, once you get the hang of the basic commands and parameters, it becomes an incredibly useful addition to any developer's toolkit.

That said, I might not have covered every single command or trick in this guide; I'm still learning myself!

If you have any tips, lesser-known commands, or cool use cases up your sleeve, feel free to share them in the comments.

I feel that this should be enough to get you started with Ollama; it's not rocket science. My advice? Just fiddle around with it.

Try different commands, tweak the parameters, and experiment with its capabilities. That’s how I learned, and honestly, that’s the best way to get comfortable with any new tool.

Happy experimenting! 🤖

by: Abhishek Prakash
Thu, 13 Mar 2025 04:27:14 GMT


FOSS Weekly #25.11: Limit Battery Charging, File Searching, Sudo Tweaks and More Linux Stuff

Keeping your laptop always plugged-in speeds up the deterioration of its battery life. But if you are using a docking station, you don't have the option to unplug the power cord.

Thankfully, you can employ a few tricks to limit battery charging levels.

How to Limit Charging Level in Linux (and Prolong Battery Life)
Prolong your laptop’s battery life in long run by limiting the charging to 80%.
FOSS Weekly #25.11: Limit Battery Charging, File Searching, Sudo Tweaks and More Linux Stuff

💬 Let's see what else you get in this edition

  • A new COSMIC-equipped Linux distro.
  • Android's native Linux terminal rolling out.
  • File searching
  • And other Linux news, tips, and, of course, memes!
  • This edition of FOSS Weekly is supported by Zep's Graphiti.

✨ Zep’s Graphiti – Open-Source Temporal Knowledge Graph for AI Agents

Traditional systems retrieve static documents, not evolving knowledge. Zep’s Graphiti is an open-source temporal knowledge graph that helps AI agents track conversations and structured data over time—enabling better memory, deeper context, and more accurate responses.

Built to evolve, Graphiti goes beyond static embeddings, powering AI that learns. Open-source, scalable, and ready to deploy.

Explore Zep’s Graphiti on GitHub and contribute!

GitHub - getzep/graphiti: Build and query dynamic, temporally-aware Knowledge Graphs
Build and query dynamic, temporally-aware Knowledge Graphs - getzep/graphiti
FOSS Weekly #25.11: Limit Battery Charging, File Searching, Sudo Tweaks and More Linux Stuff

📰 Linux and Open Source News

The Nova NVIDIA GPU driver is shaping up nicely, with a Linux kernel debut imminent.

Nvidia Driver Written in Rust Could Arrive With Linux Kernel 6.15
The Nova GPU driver is still evolving, but a kernel debut is near.
FOSS Weekly #25.11: Limit Battery Charging, File Searching, Sudo Tweaks and More Linux Stuff

🧠 What We’re Thinking About

Those naysayers who say open source software doesn't produce results need to read this.

Open Source Fueled The Oscar-Winning ‘Flow’
A great achievement pulled off using open source software!
FOSS Weekly #25.11: Limit Battery Charging, File Searching, Sudo Tweaks and More Linux Stuff

🧮 Linux Tips, Tutorials and More

Searching for files in Linux is synonymous with commands like find, xargs and grep. But not all of us Linux users are command line champs, right? Thankfully, even file explorers like Nautilus have good search features.

If you want something more than that, there are a few GUI tools like AngrySearch for this purpose.

And some sudo tips ;)

7 Ways to Tweak Sudo Command in Linux
Unleash the power of sudo with these tips 💪
FOSS Weekly #25.11: Limit Battery Charging, File Searching, Sudo Tweaks and More Linux Stuff

👷 Homelab and Maker's Corner

Take the first step towards a homelab with Raspberry Pi and CasaOS.

Enjoying Self-Hosting Software Locally With CasaOS and Raspberry Pi
I used CasaOS for self-hosting popular open source services on a Raspberry Pi. Here’s my experience.
FOSS Weekly #25.11: Limit Battery Charging, File Searching, Sudo Tweaks and More Linux Stuff

✨ Apps Highlight

Tired of Notion? Why not give this open source alternative a chance?

AFFiNE: A Truly Wonderful Open Source Notion Alternative With a Focus on Privacy
A solid open source rival to Notion and Miro. Let us take a look!
FOSS Weekly #25.11: Limit Battery Charging, File Searching, Sudo Tweaks and More Linux Stuff

📽️ Videos I am Creating for You

In the latest video, I show how easy it is to create a multiboot Linux USB.

🧩 Quiz Time

How much do you know of the Linux boot process? We have a crossword to jog your memory.

Crossword Quiz on Linux Boot Process
Test your knowledge of the Linux boot process in this fun and interactive crossword.
FOSS Weekly #25.11: Limit Battery Charging, File Searching, Sudo Tweaks and More Linux Stuff

💡 Quick Handy Tip

On Brave, you can search the history/bookmarks/tabs etc. from the address bar. Simply type @ in the address bar and start searching.

FOSS Weekly #25.11: Limit Battery Charging, File Searching, Sudo Tweaks and More Linux Stuff

🤣 Meme of the Week

Are you even a real Linux user if you aren't excited when you see a Penguin? 🐧🤔

FOSS Weekly #25.11: Limit Battery Charging, File Searching, Sudo Tweaks and More Linux Stuff

🗓️ Tech Trivia

TRADIC, developed by Bell Labs in 1954, was one of the first transistorized computers. It used nearly 800 transistors, significantly reducing power consumption.

TRADIC operated on less than 100 watts, a fraction of what vacuum tube computers needed at that time. Initially a prototype, it evolved into an airborne version for the U.S. Air Force. This innovation paved the way for future low-power computing systems.

🧑‍🤝‍🧑 FOSSverse Corner

Pro FOSSer Ernie dove into customizing his terminal with Starship.

My most recent adventure: Customizing my terminal prompt using Starship!
I read an item in today’s (March 6, 2025) ZDNet newsletter titled “Why the Starship prompt is better than your default on Linux and MacOS”. I was intrigued, so I followed the author’s instructions, and installed starship on my Garuda GNU/Linux system. Interestingly, my prompt did not change following installation and activation of starship, so I asked if Garuda uses starship to customize the terminal prompt in Firefox (I think Firefox uses the Google search engine), and the AI responded yes, ex…
FOSS Weekly #25.11: Limit Battery Charging, File Searching, Sudo Tweaks and More Linux Stuff

❤️ With love

Share it with your Linux-using friends and encourage them to subscribe (hint: it's here).

Share the articles in Linux Subreddits and community forums.

Follow us on Google News and stay updated in your News feed.

Opt for It's FOSS Plus membership and support us 🙏

Enjoy FOSS 😄

by: Abhishek Prakash
Tue, 11 Mar 2025 12:50:25 GMT


Prolong Laptop Battery Life in Linux by Limiting Charging Levels

In case you didn't know it already, regularly charging the battery to 100% or fully discharging it puts your battery under stress and may lead to poor battery life in the long run.

I am not making these claims on my own. This is what the experts and even the computer manufacturers tell you.

As you can see in the official Lenovo video above, continuous full charging and discharging accelerate the deterioration of battery health. They also tell you that the optimum battery charging range is 20-80%.

Prolong Laptop Battery Life in Linux by Limiting Charging Levels

Lenovo also tells you that batteries these days are made to last longer than your computer. I am not sure what their idea of an average computer lifespan is, but I would prefer to keep the battery healthy for a longer period and thus extract good performance from my laptop for as long as it lives.

I mean, it's all about following the best practices, right?

Now, you could manually plug and unplug the power cord, but that's not an option if you are connected to a docking station or use a modern monitor to power your laptop.

What can you do in that case? Well, to control the battery charging on Linux, you have a few options:

  • KDE Plasma has this as an in-built feature. That's why KDE is ❤️
  • GNOME has extensions for this. Typical GNOME thing.
  • There are command line tools to limit battery charging levels. Typical Linux thing 😍

Let's see them one by one.

📋
Please verify which desktop environment you are using and then follow the appropriate method.

Limit laptop battery charging in KDE

If you are using KDE Plasma desktop environment, all you have to do is to open the Settings app and go to Power Management. In the Advanced Power Settings, you'll see the battery levels settings.

I like that KDE informs the users about reduced battery life due to overcharging. It even sets the charging levels at 50-90% by default.

Prolong Laptop Battery Life in Linux by Limiting Charging Levels

Of course, you can change the limit to something like 20-80, although I am not a fan of the lower 20% limit and prefer 40-80% instead.

Prolong Laptop Battery Life in Linux by Limiting Charging Levels

That's KDE for you. Always caring for its kusers.

💡
It is possible that the battery charging control feature may need to be enabled from the BIOS. Look for it under power management settings in BIOS.

Set battery charging limit in GNOME

Like most other things, GNOME users can achieve this by using a GNOME extension.

There is an extension called ThinkPad Battery Threshold for this purpose. Although it mentions ThinkPad everywhere, you don't need to own a Lenovo ThinkPad to use it.

From what I see, the command it runs should work for most, if not all, laptops from different manufacturers.

I have a detailed tutorial on using GNOME Extensions, so I won't repeat the steps.

Use the Extension Manager tool to install ThinkPad Battery Threshold extension.

Once the extension is enabled, you can find it in the system tray. On the first run, it shows a red exclamation mark because it is not enabled yet.

Prolong Laptop Battery Life in Linux by Limiting Charging Levels

If you click on the Threshold settings, you will be presented with configuration options.

Prolong Laptop Battery Life in Linux by Limiting Charging Levels

Once you have set the desired values, click on apply. Next, you'll have to click Enable thresholds. When you hit that, it will ask for your password.

On this screen, you get a partial hint of the command it is going to run.

Prolong Laptop Battery Life in Linux by Limiting Charging Levels
📋
From what I experienced, while it does set an upper limit, it didn't set the lower limit for my Asus Zenbook. I'll check it on my Tuxedo laptop later. Meanwhile, if you try it on some other device, do share if it works for the lower charging limit as well.

Using command line to set battery charging thresholds

🚧
You must have basic knowledge of the Linux command line. That's because there are many moving parts and variables for this part.

Here's the thing. For most laptops, there should be file(s) to control battery charging in /sys/class/power_supply/BAT0/ directory but the file names are not standard. It could be charge_control_end_threshold or charge_stop_threshold or something similar.

Also, you may have more than one battery. For most laptops, it will be BAT0 that is the main battery but you need to make sure of that.

Install the upower CLI tool on your distribution and then use this command:

upower --enumerate

It will show all the power devices present on the system:

/org/freedesktop/UPower/devices/battery_BAT0
/org/freedesktop/UPower/devices/line_power_AC0
/org/freedesktop/UPower/devices/line_power_ucsi_source_psy_USBC000o001
/org/freedesktop/UPower/devices/line_power_ucsi_source_psy_USBC000o002
/org/freedesktop/UPower/devices/headphones_dev_BC_87_FA_23_77_B2
/org/freedesktop/UPower/devices/DisplayDevice

You can find the battery name here.

The next step is to look for the related file in /sys/class/power_supply/BAT0/ directory.

If you find a file starting with charge, note down its name and then add the threshold value to it.
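
A quick way to see what your hardware exposes is to list the matching files (this only reads the directory, it changes nothing):

ls /sys/class/power_supply/BAT0/ | grep charge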

In my case, it is /sys/class/power_supply/BAT0/charge_control_end_threshold, so I set an upper threshold of 80 in this way:

echo 80 | sudo tee /sys/class/power_supply/BAT0/charge_control_end_threshold

You could also use nano editor to edit the file but using tee command is quicker here.
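
Some laptops (many ThinkPads, for example) also expose a start threshold file. If yours does, a similar command sets the lower limit; this is an assumption about your hardware, so check that the file actually exists first:

echo 40 | sudo tee /sys/class/power_supply/BAT0/charge_control_start_threshold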

💡
You can also use tlp for this purpose by editing the /etc/tlp.conf file.
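
As a rough sketch, the relevant options in /etc/tlp.conf look like this (uncomment and adjust the values to your liking); whether they take effect still depends on your hardware:

START_CHARGE_THRESH_BAT0=40
STOP_CHARGE_THRESH_BAT0=80

Then restart the tlp service, for example with sudo systemctl restart tlp, for the change to apply.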

Conclusion

See, if you were getting 10 hours of average battery life on a new laptop, it is normal to expect it to be around 7-8 hours after two years. But if you leave it at full charge all the time, it may come down to 6 hours instead of 7-8 hours. The numbers here are just for illustration.

This 20-80% range is what the industry recommends these days. On my Samsung Galaxy smartphone, there is a "Battery protection" setting to stop charging the device after 80% of the charge.

I wish a healthy battery life for your laptop 💻

by: Abhishek Kumar
Mon, 10 Mar 2025 11:05:22 GMT


Enjoying Self-Hosting Software Locally With CasaOS and Raspberry Pi

If you are someone interested in self-hosting, home automation, or just want to tinker with your Raspberry Pi, you have various options to get started.

But, if you are new, and want something easy to get you up to speed, CasaOS is what you can try.

CasaOS isn't your ordinary operating system. It is more like a conductor, bringing all your favorite self-hosted applications together under one roof.

Built around the Docker ecosystem, it simplifies the process of managing various services, apps, and smart devices from a browser-based dashboard.

Enjoying Self-Hosting Software Locally With CasaOS and Raspberry Pi
CasaOS interface running on ZimaBoard

Originally developed by the makers of ZimaBoard, CasaOS makes the deployment of tools like Jellyfin, Plex, Immich and PhotoPrism a matter of a few clicks.

ZimaBoard Turned My Dream of Owning a Homelab into Reality
Get control of your data by hosting open source software easily with this plug and play homelab device.
Enjoying Self-Hosting Software Locally With CasaOS and Raspberry Pi

Let us find out more and explore how CasaOS can help transform a simple Raspberry Pi into a powerful personal cloud.

What is CasaOS?

Enjoying Self-Hosting Software Locally With CasaOS and Raspberry Pi

Think of CasaOS (Casa being "home" in Spanish) as a home for your Raspberry Pi or similar device.

It sits on top of your existing operating system, like Ubuntu or Raspberry Pi OS, and transforms it into a self-hosting machine.

CasaOS simplifies the process of installing and managing applications you'd typically run as Docker containers, with the user-friendliness of a Docker management platform like Portainer.

It acts as the interface between you and your applications, providing a sleek, user-friendly dashboard that allows you to control everything from one place.

You can deploy various applications, including media servers like Jellyfin or file-sharing platforms like Nextcloud, all through its web-based interface.

Installing CasaOS on Raspberry Pi

Installing CasaOS on a Raspberry Pi is as easy as running a single bash script. But first, let’s make sure your Raspberry Pi is ready:

💡
Feeling a bit hesitant about running scripts? CasaOS offers a live demo on their website (username: casaos, password: casaos) to familiarize yourself with the interface before taking the plunge.

Ensure your Pi’s operating system is up-to-date by running the following commands:

sudo apt update && sudo apt upgrade -y

If you do not have curl installed already, install it by running:

sudo apt install curl -y

Now, grab the installation script from the official website and run it:

curl -fsSL https://get.casaos.io | sudo bash
Enjoying Self-Hosting Software Locally With CasaOS and Raspberry Pi

Access the CasaOS web interface

After the installation completes, you will receive the IP address in the terminal to access CasaOS from your web browser.

Simply type this address into your browser (if you are unsure, run hostname -I on the Raspberry Pi to get its IP), and you will be greeted by the CasaOS welcome screen.

Enjoying Self-Hosting Software Locally With CasaOS and Raspberry Pi


The initial setup process will guide you through creating an account and getting started with your personal cloud.

Enjoying Self-Hosting Software Locally With CasaOS and Raspberry Pi

Getting Started

Once inside, CasaOS welcomes you with a clean, modern interface. You’ll see system stats like CPU usage, memory, and disk space upfront in widget-style panels.

There’s also a search bar for easy navigation, and at the heart of the dashboard lies the app drawer—your gateway to all installed and available applications.

Enjoying Self-Hosting Software Locally With CasaOS and Raspberry Pi

CasaOS comes pre-installed with two main apps: Files and the App Store. While the Files app gives you easy access to local storage on your Raspberry Pi, the App Store is where the magic really happens.

From here, you can install various applications with just a few clicks.

Exploring the magical app store

The App Store is one of the main attractions of CasaOS. It offers a curated selection of applications that can be deployed directly on your Pi with minimal effort.

Here’s how you can install an app:

  1. Go to the app store
    From the dashboard, click on the App Store icon.
Enjoying Self-Hosting Software Locally With CasaOS and Raspberry Pi
  2. Browse or search for an app
    Scroll through the list of available apps or use the search bar to find what you’re looking for.
Enjoying Self-Hosting Software Locally With CasaOS and Raspberry Pi
  3. Click install
    Once you find the app you want, simply click on the installation button, and CasaOS will handle the rest.
Enjoying Self-Hosting Software Locally With CasaOS and Raspberry Pi

The app will appear in your app drawer once the installation is complete.

Enjoying Self-Hosting Software Locally With CasaOS and Raspberry Pi

It is that simple.

💡
Container-level settings for the apps can be accessed by right-clicking the app icon in the dashboard. It lets you map (Docker volume) directories on the disk with the app. For example, if you are using Jellyfin, you should map your media folder in the Jellyfin (container) settings. You will see this in the later sections of this tutorial.

Access

Once you have installed applications in CasaOS, accessing them is straightforward, thanks to its intuitive design.

All you have to do is click on the Jellyfin icon, and it will automatically open up in a new browser window.

Enjoying Self-Hosting Software Locally With CasaOS and Raspberry Pi

Each application you install behaves in a similar way; CasaOS takes care of the back-end configurations to make sure the apps are easily accessible through your browser.

No need to manually input IP addresses or ports, as CasaOS handles that for you.

For applications like Jellyfin or any self-hosted service, you will likely need to log in with default credentials (which you can and should change after the first use).

In the case of Jellyfin, the default login credentials were:

  • Username: admin
  • Password: admin

Of course, CasaOS allows you to customize these credentials when setting up the app initially, and it's always a good idea to use something more secure.

My experience with CasaOS

For this article, I installed a few applications on CasaOS tailored to my homelab needs, which I discuss below.

I spent a full week testing these services in my daily routine and jotted down some key takeaways, both good and bad.

While CasaOS offers a smooth experience overall, there are some quirks that require you to have Docker knowledge to work with them.

💡
I faced a few issues that were caused by mounting external drives and binding them to the CasaOS apps. I solved them by automounting an external disk.

Jellyfin media server: Extra drive mount issue

When I first set up Jellyfin on day one, it worked well right out of the box. However, things got tricky once I added an extra drive for my media library.

I spent a good chunk of time managing permissions and binding volumes, which was definitely not beginner-friendly.

For someone new to Docker or CasaOS, the concept of binding volumes can be perplexing. You don't just plug in the drive and expect it to work; it requires configuring how your media files will link to the Jellyfin container.

Enjoying Self-Hosting Software Locally With CasaOS and Raspberry Pi
You need to edit the fstab file if you want it to mount at the exact same location every time

Even after jumping through those hoops, it wasn’t smooth sailing. One evening, I accidentally turned off the Raspberry Pi.

When it booted back up, the additional drive wasn’t mounted automatically, and I had to go through the whole setup process again ☹️

So while Jellyfin works, managing external drives in CasaOS feels like it could be a headache for new users.
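
For reference, the kind of /etc/fstab entry that keeps an external drive mounted at the same place across reboots looks roughly like this; the UUID, mount point and filesystem type are placeholders you would replace with your own (lsblk -f shows the UUID):

UUID=1234-ABCD  /mnt/media  ext4  defaults,nofail  0  2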

Cloudflared connection drops

I used Cloudflare Tunnel to access the services from outside the home network.

It was a bit of a mixed bag. For the most part, it worked fine, but there were brief periods where the connection was not working even though it said it was connected.

The connection would just drop unexpectedly, and I’d have to fiddle around with it to get things running again.

Enjoying Self-Hosting Software Locally With CasaOS and Raspberry Pi

After doing some digging, I found out that the CLI tool for Cloudflare Tunnels had recently been updated, so that might’ve been the root of the issue.

Hopefully, it was a temporary glitch, but it is something to keep in mind if you rely on stable connections.

Transmission torrent client: Jellyfin's story repeats

💡
The default username & password is casaos. The tooltip for some applications contains such information. You can also edit them and add notes for the application.

Transmission was solid for saving files locally, but as soon as I tried adding the extra drive to save files on my media library, I hit the same wall as with Jellyfin.

The permissions errors cropped up, and again, the auto-mount issue reared its head.

So, I would say it is fine for local use if you’re sticking to one drive, but if you plan to expand your storage, be ready for some trial and error.

Nextcloud: Good enough but not perfect

Setting up a basic Nextcloud instance in CasaOS was surprisingly easy. It was a matter of clicking the install button, and within a few moments, I had my personal cloud up and running.

However, if you’re like me and care about how your data is organized and stored, there are a few things you’ll want to keep in mind.

When you first access your Nextcloud instance, it defaults to using SQLite as the database, which is fine for simple, small-scale setups.

But if you’re serious about storing larger files or managing multiple users, you’ll quickly realize that SQLite isn’t the best option. Nextcloud itself warns you that it’s not ideal for handling larger loads, and I would highly recommend setting up a proper MySQL or MariaDB database instead.

Enjoying Self-Hosting Software Locally With CasaOS and Raspberry Pi

Doing so will give you more stability and performance in the long run, especially as your data grows.

Beyond the database choice, I found that even after using the default setup, Nextcloud’s health checks flagged several issues.

Enjoying Self-Hosting Software Locally With CasaOS and Raspberry Pi

For example, it complained about the lack of an HTTPS connection, which is crucial for secure file transfers.

If you want your Nextcloud instance to be properly configured and secure, you'll need to invest some time to set up things like:

  • Setting up secure SSL certificate
  • Optimizing your database
  • Handling other backend details that aren’t obvious to a new user.

So while Nextcloud is easy to get running initially, fine-tuning it for real-world use takes a bit of extra work, especially if you are focused on data integrity and security.

Custom WordPress stack: Good stuff!

Now, coming to the WordPress stack I manually added, this is where CasaOS pleasantly surprised me.

While I still prefer using Portainer to manage my custom Docker stacks, I have to admit that CasaOS has put in great effort to make the process intuitive.

Enjoying Self-Hosting Software Locally With CasaOS and Raspberry Pi

It is clear they’ve thought about users who want to deploy their own stacks using Docker Compose files or Docker commands.

Adding the stack was simple, and the CasaOS interface made it relatively easy to navigate.
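
To give an idea of what such a custom stack looks like, here is a minimal Docker Compose sketch for WordPress with a MariaDB database. The image tags, passwords and port are placeholders, not something CasaOS generates for you:

services:
  db:
    image: mariadb:11
    environment:
      MYSQL_ROOT_PASSWORD: changeme
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wp
      MYSQL_PASSWORD: changeme
    volumes:
      - db_data:/var/lib/mysql

  wordpress:
    image: wordpress:latest
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wp
      WORDPRESS_DB_PASSWORD: changeme
      WORDPRESS_DB_NAME: wordpress
    depends_on:
      - db

volumes:
  db_data: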

Final thoughts

After using CasaOS for several days, I can confidently say it’s a tool with immense potential. The ease of deploying apps like Jellyfin and Nextcloud makes it a breeze for users who want a no-hassle, self-hosted solution.

However, CasaOS is not perfect yet. The app store, while growing, feels limited, and those looking for a more customizable experience may find the lack of advanced Docker controls frustrating at first.

Learn Docker: Complete Beginner’s Course
Learn Docker, an important skill to have for any DevOps and modern sysadmin. Learn all the essentials of Docker in this series.
Enjoying Self-Hosting Software Locally With CasaOS and Raspberry Pi

That said, CasaOS succeeds in making Docker and self-hosting more accessible to the masses.

For homelab enthusiasts like me, it is a great middle ground between the complexity of Docker CLI and the bloated nature of full-blown home automation systems.

Whether you are a newcomer or a seasoned tinkerer, CasaOS is worth checking out if you are not afraid of dealing with a few bumps along the way.

by: Community
Sat, 08 Mar 2025 08:54:21 GMT


From OpenBSD to Linux: How Pledge can Enhance Linux Security

Imagine a scenario: you downloaded a new binary called ls from the internet. The application could be malicious by design. Binary files are difficult to trust before running them on your system. They could lead to a system hijack, send your sensitive files and clipboard contents to a malicious server, or interfere with existing processes on your machine.

Wouldn't it be great if you had a tool to run and test an application within defined security parameters? Like we all know, the ls command lists the files in the current working directory. So, why would it require a network connection to operate? Does that make sense?

That’s where the tool, Pledge, comes in. Pledge restricts the system calls a program can make. Pledge is natively supported on OpenBSD systems. Although it isn’t officially supported on Linux systems, I’ll show you a cool hack to utilize pledge on your Linux systems.

🚧
As you can see, this is rather an advanced tool for sysadmins, network engineers and people in the network security field. Most desktop Linux users would not need something like this but that does not mean you cannot explore it out of curiosity.

What makes this port possible?

Thanks to the remarkable work done by Justine Tunney. She is the core developer behind the Cosmopolitan Libc project.

Cosmopolitan acts as a bridge for compiling a C program for 7 different platforms (Linux + Mac + Windows + FreeBSD + OpenBSD 7.3 + NetBSD + BIOS) in one go.

Utilizing Cosmopolitan Libc, she was able to port OpenBSD's pledge to Linux. Here's a nice blog post by her on the topic.

📋
A quick disclaimer: Just because you can compile a C program for 7 different platforms doesn’t mean you would be able to successfully run on all these platforms. You need to handle program dependencies as well. For instance, Iptables uses Linux sockets, so you can’t expect it to work magically on Windows systems unless you come up with a way to establish Linux socket networking to Windows.

Restrict system calls() with Pledge

You might be surprised to know one single binary can run on 7 different platforms - Windows, Linux, Mac, FreeBSD, OpenBSD, NetBSD and BIOS.

These binary files are called Actually Portable Executables (APE). You can check out this blog for more information. These binary files have the .com suffix, and the suffix is necessary for them to work.

This guide will show how to use pledge.com binary on your Linux system to restrict system calls while launching any binaries or applications.

Step 1: Download pledge.com

You can download pledge-1.8.com from the URL: http://justine.lol/pledge/pledge-1.8.com

Rename the downloaded file pledge-1.8.com to pledge.com.
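
For example, from the terminal:

wget http://justine.lol/pledge/pledge-1.8.com
mv pledge-1.8.com pledge.com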

Step 2: Make it executable

Run this command to make it executable.

chmod +x ./pledge.com

Step 3: Add pledge.com to the path

A quick way to accomplish this is to move the binary in standard /usr/local/bin/ location.

sudo mv ./pledge.com /usr/local/bin

Step 4: Run and test

pledge.com curl http://itsfoss.com

I didn't assign any permissions (called promises) to it, so it fails as expected. But the output gives us a hint about which system calls the curl binary requires when it runs.

From OpenBSD to Linux: How Pledge can Enhance Linux Security

With this information, you can see if a program is requesting a system call that it should not. For example, a file explorer program asking for dns. Is it normal?

Curl is a tool that deals with URLs and indeed requires those system calls.

Let's assign promises using the -p flag. I'll explain what each of these promises does in the next section.

pledge.com -p 'stdio rpath inet dns tty sendfd recvfd' \
curl -s http://itsfoss.com
From OpenBSD to Linux: How Pledge can Enhance Linux Security
📋
The debug message error:pledge inet for socket is misleading. A similar open issue exists in the project's GitHub repo. It is evident that after providing this set of promises, "stdio rpath inet dns tty sendfd recvfd", to our curl binary, it works as expected.

It successfully redirects to the HTTPS version of our website. Let's see if, with the same set of promises, it can talk to HTTPS-enabled websites as well.

pledge.com -p 'stdio rpath inet dns tty sendfd recvfd' \
curl -s https://itsfoss.com
From OpenBSD to Linux: How Pledge can Enhance Linux Security

Yeah! It worked.

A quick glance at promises

In the above section, we used 7 promises to make our curl request successful. Here's a quick glimpse into what each promise is intended for:

  • stdio: Allows reading and writing to standard input/output (like printing to the console).
  • rpath: Allows reading files from the filesystem.
  • inet: Allows network-related operations (for example, connecting to a server).
  • dns: Allows resolving DNS queries.
  • tty: Allows access to the terminal.
  • sendfd: Allows sending file descriptors.
  • recvfd: Allows receiving file descriptors.

To know what other promises are supported by the pledge binary, head over to this blog.

Porting OpenBSD pledge() to Linux
Sandboxing for Linux has never been easier.
From OpenBSD to Linux: How Pledge can Enhance Linux Security

Conclusion

OpenBSD's pledge follows the least privilege model. It prevents programs from misusing system resources. Following this security model, the damage a malicious application can do is quite limited. Although Linux has seccomp and AppArmor in its security arsenal, I find pledge more intuitive and easier to use.

With Actually Portable Executables (APE), Linux users can now enjoy the simplicity of pledge to make their systems more secure. The granular control it gives over what processes can do adds an extra layer of defense.

Author Info

From OpenBSD to Linux: How Pledge can Enhance Linux Security

Bhuwan Mishra is a full-stack developer, with Python and Go as his tools of choice. He takes pride in building and securing web applications, APIs, and CI/CD pipelines, as well as tuning servers for optimal performance. He also has a passion for working with Kubernetes.

by: Abhishek Prakash
Thu, 06 Mar 2025 05:27:13 GMT


FOSS Weekly #25.10: Skype is Dead, GNOME 48 Features, Ubuntu Versions, Nano Guide and More Linux Stuff

Skype is being discontinued by Microsoft on 5th May.

Once a hallmark of the old internet, Skype was already dying a slow death. It just could not keep up with the competition from WhatsApp, Zoom etc despite Microsoft's backing.

While there are open source alternatives to Skype, I doubt if friends and family would use them.

I am not going to miss it, as I haven't used Skype in years. Let's keep it in the museum of Internet history.

Speaking of the old internet, Digg is making a comeback. 20 years back, it was the 'front page of the internet'.

💬 Let's see what else you get in this edition

  • VLC aiming for the Moon.
  • EA open sourcing its games.
  • GNOME 48 features to expect.
  • And other Linux news, tips, and, of course, memes!
  • This edition of FOSS Weekly is supported by ONLYOFFICE.

✨ ONLYOFFICE PDF Editor: Create, Edit and Collaborate on PDFs on Linux

The ONLYOFFICE suite now offers an updated PDF editor that comes equipped with collaborative PDF editing and other useful features.

You can deploy ONLYOFFICE Docs on your Linux server and integrate it with your favourite platform, such as Nextcloud, Moodle and more. Alternatively, you can download the free desktop app for your Linux distro.

Online PDF editor, reader and converter | ONLYOFFICE
View and create PDF files from any text document, spreadsheet or presentation, convert PDF to DOCX online, create fillable PDF forms.
FOSS Weekly #25.10: Skype is Dead, GNOME 48 Features, Ubuntu Versions, Nano Guide and More Linux Stuff

📰 Linux and Open Source News

GNOME 48 is just around the corner, check out what features are coming:

Discover What’s New in GNOME 48 With Our Feature Rundown!
GNOME 48 is just around the corner. Explore what’s coming with it.
FOSS Weekly #25.10: Skype is Dead, GNOME 48 Features, Ubuntu Versions, Nano Guide and More Linux Stuff

🧠 What We’re Thinking About

A German startup has published open source plans for its Nuclear Fusion power plant!

As per the latest desktop market share report, macOS usage has seen a notable dip on Steam.

🧮 Linux Tips, Tutorials and More

New users often get confused with so many Ubuntu versions. This article helps clear the doubt.

Explained: Which Ubuntu Version Should I Use?
Confused about Ubuntu vs Xubuntu vs Lubuntu vs Kubuntu?? Want to know which Ubuntu flavor you should use? This beginner’s guide helps you decide which Ubuntu should you choose.
FOSS Weekly #25.10: Skype is Dead, GNOME 48 Features, Ubuntu Versions, Nano Guide and More Linux Stuff

👷 Homelab and Maker's Corner

As a Kodi user, you cannot miss out on installing add-ons and builds. We also have a list of the best add-ons to spice up your media server.

And you can use virtual keyboard with Raspberry Pi easily.

Using On-screen Keyboard in Raspberry Pi OS
Here’s what you can do to use a virtual keyboard on Raspberry Pi OS.
FOSS Weekly #25.10: Skype is Dead, GNOME 48 Features, Ubuntu Versions, Nano Guide and More Linux Stuff

✨ Apps Highlight

Facing slow downloads on your Android smartphone? Aria2App can help.

Aria2App is a Super Fast Versatile Open-Source Download Manager for Android
A useful open-source download manager for Android
FOSS Weekly #25.10: Skype is Dead, GNOME 48 Features, Ubuntu Versions, Nano Guide and More Linux Stuff

lichess lets you compete with other players in online games of Chess.

📽️ Video I am Creating for You

How much does an active cooler cools down a Raspberry Pi 5? Let's find it out in this quick video.

🧩 Quiz Time

For a change, you can take the text processing command crossword challenge.

Commands to Work With Text Files: Crossword
Solve this crossword with commands for text processing.
FOSS Weekly #25.10: Skype is Dead, GNOME 48 Features, Ubuntu Versions, Nano Guide and More Linux Stuff

💡 Quick Handy Tip

You can play Lofi music in VLC Media Player. First, switch to the Playlist view in VLC by going to View → Playlist.

Now, in the sidebar, scroll down and select Icecast Radio Directory. Here, search for Lofi in the search bar.

FOSS Weekly #25.10: Skype is Dead, GNOME 48 Features, Ubuntu Versions, Nano Guide and More Linux Stuff

Now, double-click on any Lo-fi channel to start playing. On the other hand, if you want to listen to music via the web browser, you can use freeCodeCamp.org Code Radio.

🤣 Meme of the Week

You didn't have to join the dark side, Firefox. 🫤

FOSS Weekly #25.10: Skype is Dead, GNOME 48 Features, Ubuntu Versions, Nano Guide and More Linux Stuff

🗓️ Tech Trivia

In 1953, MIT's Whirlwind computer showcased an early form of system management software called "Director," developed by Douglas Ross. Demonstrated at a digital fire control symposium, Director automated resource allocation (like memory, storage, and printing), making it one of the earliest examples of an operating system-like program.

🧑‍🤝‍🧑 FOSSverse Corner

An important question has been raised by one of our longtime FOSSers.

Do we all see the same thing on the internet?
I think we all assume we are seeing the same content on a website. But do we.? Read this quote from an article on the Australian ABC news “Many people are unaware that the internet they see is unique to them. Even if we surf the same news websites, we’ll see different news stories based on our previous likes. And on a website like Amazon, almost every item and price we see is unique to us. It is chosen by algorithms based on what we were previously wanting to buy and willing to pay. There is…
FOSS Weekly #25.10: Skype is Dead, GNOME 48 Features, Ubuntu Versions, Nano Guide and More Linux Stuff

❤️ With love

Share it with your Linux-using friends and encourage them to subscribe (hint: it's here).

Share the articles in Linux Subreddits and community forums.

Follow us on Google News and stay updated in your News feed.

Opt for It's FOSS Plus membership and support us 🙏

Enjoy FOSS 😄

by: Sreenath
Thu, 06 Mar 2025 03:09:13 GMT


Record Windows and Cropped Area in OBS Studio

When it comes to screen recording on Linux or any other operating system, OBS Studio is the go-to choice.

It has all the features baked in, catering to users ranging from casual screen recorders to advanced streamers.

One such useful feature is to record a part of the screen in OBS Studio. I'll share the detailed steps for Linux users in this tutorial.

🚧
The method mentioned is based on a Wayland session. Also, this is a personal workflow, and if readers have better options, feel free to comment, so that I can improve the article for everyone.
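
If you are not sure whether your current desktop session runs on Wayland or X11, you can check it from a terminal. This is a minimal check; it prints either wayland or x11:

echo $XDG_SESSION_TYPE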

Record an application window in OBS Studio

Before starting, first click on File → Settings from the OBS Studio main menu. Here, in the Settings window, go to the Video section and note the Canvas resolution and Output scale resolution for your system.

Note Canvas and Output Scale values

This will be helpful when you revert the settings in a later step.

Step 1: Create a new source

First, let's create a new source for the recording. Click on the “+” icon on the OBS Studio home screen as shown in the screenshot below, and select the “Screen Capture (PipeWire)” option.

📋
For an X11 session, this may be Display Capture (XSHM).
Click on "+" to add a new source

On the resulting window, give a name to the source and then click OK.

Give a name to the source

Once you press OK, you will be shown a dialog box to select the record area.

Step 2: Select the window to record

Here, select the Window option from the top bar.

Select the window to be recorded.

Once you click on the Window option, you will be able to see all the open windows listed. Select a window that you want to record from the list, as shown in the screenshot above.

This will give you a dialog box, with a preview of the window being recorded.

Enable the cursor recording (if needed) and click OK.

Selected window in preview

Step 3: Crop the video to window size

Now, in the main OBS window, you can see that the application you have selected is not filling the full canvas, in my case 1920×1080.

Empty space in canvas

If you keep recording with this setting, the output will contain this window with the rest of the canvas in black.

You need to crop the area so that only the necessary part is present on the output file.

For this, right-click on the source and select the Resize Output (Source Size) option, as shown below:

Resize output source size

Click on Yes, when prompted.

Accept Confirmation

As soon as you click Yes, you can see that the canvas is now reduced to the size of the window.

Canvas Resized

Step 4: Record the video

You can now start recording the video using the Record button.

Start video recording

Once finished, stop the recording. The saved video file will contain only the window and nothing else.

Step 5: Delete the video source

Now that you have recorded the video, let's remove this particular source.

Right-click on the source and select Remove.

Remove the source

Step 6: Revert the canvas and output scale

While we resized the canvas to fit the window, the change was also applied to your OBS Studio video settings. If left unchanged, your future videos will also be recorded at the reduced size.

So, click on File in the OBS Studio main menu and select Settings.

Click on File → Settings

In the Settings window, go to the Video section and revert the Base (Canvas) Resolution and Output (Scaled) Resolution to your preferred normal values. Then click Apply.

Revert Canvas Size to normal

Record an area on the screen in OBS Studio

This is the same process as the one described above, except for the area selection.

Step 1: Create a new source

Click on the plus button in the Sources section of OBS Studio and select Screen Capture.

Select Screen Capture

Name the source and click OK.

Step 2: Select a region

In the area selection dialog box, click on Region. Then, choose the Select Region option.

Select Region

Notice that the cursor has now changed to a plus sign. Drag to select the area you want to record.

Select Area to Record

You can see that the preview now shows the selected area. Don't forget to enable the cursor, if needed.

It is normal for the canvas to be much bigger than the selection, with your video occupying only a part of it.

Canvas Size Mismatch

Step 3: Resize the source

As in the previous section, right-click on the source and select the Resize Output (Source Size) option.

Resize Output to Area Capture

Step 4: Record and revert the settings

Start recording the video. Once it is completed, save the recording and remove the source. Revert the canvas and output scale settings, as shown in step 6 of the previous section.

💬 I hope this guide has helped you record with OBS Studio. Please let me know in the comments if it worked for you or if you need further help.

by: Abhishek Prakash
Wed, 05 Mar 2025 03:12:16 GMT


Using On-Screen Keyboard in Raspberry Pi OS

From kiosk projects to homelab dashboards, there are numerous uses for a touchscreen display with a Raspberry Pi.

And it makes total sense to use the on-screen keyboard on the touch device rather than plugging in a keyboard and mouse.

Thankfully, the latest versions of Raspberry Pi OS provide a simple way to install and use the on-screen keyboard.

On-screen keyboard on Raspberry Pi

Let me show you how to install on-screen keyboard support on Raspberry Pi OS.

📋
I am using the DIY Touchscreen by SunFounder (partner link). It's an interesting display that is also compatible with other SBCs. I'll be doing its full review next week. The steps should work on other touch screens, too.
SunFounder Latest 10 Inch DIY Touch Screen All-In-One Solution for Raspberry Pi 5, IPS HD 1280x800 LCD, Built-In USB-C PD 5.1V/5A Output, HDMI, 10-point, No Driver, Speakers, for RPi 5/4/3/Zero 2W
This SunFounder Touch Screen is a 10-point IPS touch screen in a 10.1″ big size and with a high resolution of 1280x800, bringing you perfect visual experience. It works with various operating systems including Raspberry Pi OS, Ubuntu, Ubuntu Mate, Windows, Android, and Chrome OS.

Partner Link

First, check if you already have on-screen keyboard support

Raspberry Pi OS Bookworm and later versions include the Squeekboard software for the on-screen keyboard feature.

Now, this package may already be installed by default. If you open a terminal, tap the interface, and the keyboard comes up, you already have everything set.

It is also possible that it is installed but not enabled.
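
Assuming you can reach a terminal, a quick way to confirm whether the squeekboard package is already present is to query the package manager:

dpkg -s squeekboard

If the package is installed, this prints its status and version; otherwise, it reports that the package is not installed.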

Go to the menu, then Preferences, and open the Raspberry Pi Configuration tool. In the Display tab, see if you can change the setting for the on-screen keyboard.

On-screen keyboard support already installed on Raspberry Pi

If you tap the on-screen keyboard settings and it says, "A virtual keyboard is not installed", you will have to install the software first. The next section details the steps.

Virtual Keyboard is not installed

Getting on-screen keyboard in Raspberry Pi OS Bookworm

🚧
You'll need a physical keyboard and mouse for installing the required package. If you cannot connect one, you could try to SSH into the Pi.
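
If you go the SSH route, a typical connection looks like this; the username and hostname below are placeholders for your own setup:

ssh pi@raspberrypi.local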

Update the package cache of your Raspberry Pi first:

sudo apt update

The squeekboard package provides the virtual keyboard in Debian. Install it using the command below:

sudo apt install squeekboard

Once installed, click on the menu and start Raspberry Pi Configuration from Preferences.

Access Raspberry Pi Configuration

In the Raspberry Pi Configuration tool, go to the Display tab and tap the on-screen keyboard setting.


You'll see three options:

  • Enabled always: The on-screen keyboard will always be accessible through the top panel, whether you are using a touchscreen or not.
  • Enabled if touchscreen found: The on-screen keyboard is only accessible when it detects a touchscreen.
  • Disabled: Virtual keyboard won't be accessible at all.

Out of these three, you'll be tempted to go for 'Enabled if touchscreen found'.

However, it didn't work for me. I opted for Enabled always instead.

But not all applications will automatically bring up the on-screen keyboard. In my case, Chromium didn't play well. Thankfully, the on-screen keyboard icon in the top panel lets you access it at will.

Virtual keyboard comes up for supported applications, but it is also accessible from the top panel

And this way, you can enjoy the keyboard on a touchscreen.

Conclusion

For older versions of Raspberry Pi OS, you could also go with the matchbox-keyboard package.

sudo apt install matchbox-keyboard

Since Squeekboard is for Wayland, perhaps Matchbox will work on the Xorg display server.

The official documentation for SunFounder's touchscreen mentions that Squeekboard is installed by default in Raspberry Pi OS, but that was not the case for me.

Installing it was a matter of one command, and then the virtual keyboard was up and running. This was tested on Raspberry Pi OS, but since Squeekboard is available for Wayland in general, it might work on other operating systems, too.

💬 Did it work for you? If yes, a simple 'thank you' will encourage me. If not, please provide the details and I'll try to help you.
