Blog Entries posted by Blogger

  1. by: Sourav Rudra
    Thu, 13 Nov 2025 12:13:58 GMT

    Nitrux is a Debian-based Linux distribution that has always stood out for its bold design choices. It even made our list of the most beautiful Linux distributions.
    Earlier this year, the project made a significant announcement: it discontinued its custom NX Desktop and the underlying KDE Plasma base, prioritizing a Hyprland desktop experience combined with its in-house app distribution methods.
    Now, the first major release reflecting this redefined approach is finally here.
    🆕 Nitrux 5.0.0: What's New?

    The release uses OpenRC 0.63 as its init system instead of systemd. This is paired with either Liquorix kernel 6.17.7 or a CachyOS-patched kernel, depending on your hardware, and the desktop experience is Wayland-only. KDE Plasma, KWin, and SDDM are gone.
    In their place, you get Hyprland with Waybar for the panel, Crystal Dock as the dock, and greetd paired with the QtGreet greeter handling login. Wofi serves as the application launcher, while wlogout handles logout actions.
    Nitrux 5.0.0 ships with an immutable root filesystem powered by NX Overlayroot. This provides system stability and rollback capabilities through the Nitrux Update Tool System (nuts).
    Plus, there is Nitrux's new approach to software management. NX AppHub and AppBoxes are now the primary methods for installing applications. Flatpak and Distrobox remain available as complementary options.
    There are many updated apps and tooling in this release too:
    Podman 5.6.1
    Docker 26.1.5
    Git 2.51.0
    Python 3.13.7
    OpenRazer 3.10.3
    MESA 25.2.3
    BlueZ 5.84
    PipeWire 1.4.8
    The developers are clear about who Nitrux is for. It is designed for users who see configuration as empowerment, not inconvenience. This isn't a distribution trying to please everyone.
    The team put it this way in their announcement:
    📥 Download Nitrux 5.0.0
    The nitrux-contemporary-cachy-nvopen ISO is designed for NVIDIA hardware. It includes the NVIDIA Open Kernel Module and uses the CachyOS-patched kernel.
    The nitrux-contemporary-liquorix-mesa ISO targets AMD and Intel graphics. It ships with the Liquorix kernel and MESA drivers. Both versions are also available through SourceForge.
    Nitrux 5.0 (SourceForge)
    A fresh installation is strongly recommended for this release. Updates from Nitrux 3.9.1 to 5.0.0 are not supported. Future updates will be delivered through the Nitrux Update Tool System.
    Also, virtual machines are not supported natively, as the team removed many VM-specific components. You can learn more in the release notes.
    Suggested Read 📖
    Here are the Most Beautiful Linux Distributions in 2025: Aesthetically pleasing? Customized out of the box? You get the best of both worlds in this list. (It's FOSS, Ankush Das)
  2. by: Nitij Taneja
    Thu, 13 Nov 2025 09:50:29 GMT

    Introduction
    In the rapidly evolving landscape of Artificial Intelligence, Retrieval-Augmented Generation (RAG) has emerged as a pivotal technique for enhancing the factual accuracy and relevance of Large Language Models (LLMs). By enabling LLMs to retrieve information from external knowledge bases before generating responses, RAG mitigates common issues such as hallucination and outdated information.
    However, traditional RAG approaches often rely on vector-based similarity searches, which, while effective for broad retrieval, can sometimes fall short in capturing the intricate relationships and contextual nuances present in complex data. This limitation can lead to the retrieval of fragmented information, hindering the LLM's ability to synthesize truly comprehensive and contextually appropriate answers.
    Enter Graph RAG, a groundbreaking advancement that addresses these challenges by integrating the power of knowledge graphs directly into the retrieval process. Unlike conventional RAG systems that treat information as isolated chunks, Graph RAG dynamically constructs and leverages knowledge graphs to understand the interconnectedness of entities and concepts.
    This allows for a more intelligent and precise retrieval mechanism, where the system can navigate relationships within the data to fetch not just relevant information, but also the surrounding context that enriches the LLM's understanding. By doing so, Graph RAG ensures that the retrieved knowledge is not only accurate but also deeply contextual, leading to significantly improved response quality and a more robust AI system.
    This article will delve into the core principles of Graph RAG, explore its key features, demonstrate its practical applications with code examples, and discuss how it represents a significant leap forward in building more intelligent and reliable AI applications.
    Key Features of Graph RAG
    Graph RAG distinguishes itself from traditional RAG architectures through several innovative features that collectively contribute to its enhanced retrieval capabilities and contextual understanding. These features are not merely additive but fundamentally reshape how information is accessed and utilized by LLMs.
    Dynamic Knowledge Graph Construction
    One of the most significant advancements of Graph RAG is its ability to construct a knowledge graph dynamically during the retrieval process.
    Traditional knowledge graphs are often pre-built and static, requiring extensive manual effort or complex ETL (Extract, Transform, Load) pipelines to maintain and update. In contrast, Graph RAG builds or expands the graph in real time based on the entities and relationships identified from the input query and initial retrieval results.
    This on-the-fly construction ensures that the knowledge graph is always relevant to the immediate context of the user's query, avoiding the overhead of managing a massive, all-encompassing graph. This dynamic nature allows the system to adapt to new information and evolving contexts without requiring constant re-indexing or graph reconstruction.
    For instance, if a query mentions a newly discovered scientific concept, Graph RAG can incorporate this into its temporary knowledge graph, linking it to existing related entities, thereby providing up-to-date and relevant information.
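    To make this concrete, here is a minimal sketch (using NetworkX, as in the examples later in this article) of adding a newly mentioned concept to a per-query graph on the fly; the entity names and the relation label are purely illustrative assumptions:

    import networkx as nx

    # A small per-query graph, built fresh for the current request
    query_graph = nx.Graph()
    query_graph.add_node("CRISPR-Cas9", type="CONCEPT")  # entity already known to the system

    # A newly mentioned concept appears in the query or retrieved text;
    # add it immediately and link it to the related, existing entity.
    query_graph.add_node("prime editing", type="CONCEPT")  # hypothetical new concept
    query_graph.add_edge("prime editing", "CRISPR-Cas9", relation="related_to")

    print(query_graph.edges(data=True))

    Nothing here requires re-indexing a global graph; the structure only lives for the duration of the query.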
    Intelligent Entity Linking
    At the heart of dynamic graph construction lies intelligent entity linking.
    As information is processed, Graph RAG identifies key entities (e.g., people, organizations, locations, concepts) and establishes relationships between them. This goes beyond simple keyword matching; it involves understanding the semantic connections between different pieces of information.
    For example, if a document mentions "GPT-4" and another mentions "OpenAI," the system can link these entities through a "developed by" relationship. This linking process is crucial because it allows the RAG system to traverse the graph and retrieve not just the direct answer to a query, but also related information that provides richer context.
    This is particularly beneficial in domains where entities are highly interconnected, such as medical research, legal documents, or financial reports. By linking relevant entities, Graph RAG ensures a more comprehensive and interconnected retrieval, enhancing the depth and breadth of the information provided to the LLM.
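    As a rough sketch of this linking step, assuming the NetworkX graph used later in the article (the trigger phrase and the developed_by relation are illustrative assumptions rather than a full relation-extraction model):

    import networkx as nx

    graph = nx.DiGraph()

    def link_if_related(graph, text, entity_a, entity_b, trigger, relation):
        # Naive relation extraction: if both entities co-occur with a trigger phrase,
        # record an explicit, typed edge between them.
        if entity_a in text and entity_b in text and trigger in text:
            graph.add_edge(entity_a, entity_b, relation=relation)

    link_if_related(graph, "GPT-4 was developed by OpenAI.",
                    "GPT-4", "OpenAI", trigger="developed by", relation="developed_by")
    print(list(graph.edges(data=True)))  # [('GPT-4', 'OpenAI', {'relation': 'developed_by'})]

    A production system would use a trained relation-extraction model instead of a trigger phrase, but the resulting typed edge is what lets the retriever traverse from one entity to the other.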
    Contextual Decision-Making with Graph Traversal
    Unlike vector search, which retrieves information based on semantic similarity in an embedding space, Graph RAG leverages the explicit relationships within the knowledge graph for contextual decision-making.
    When a query is posed, the system doesn't just pull isolated documents; it performs graph traversals, following paths between nodes to identify the most relevant and contextually appropriate information.
    This means the system can answer complex, multi-hop questions that require connecting disparate pieces of information.
    For example, to answer "What are the main research areas of the lead scientist at DeepMind?", a traditional RAG might struggle to connect "DeepMind" to its "lead scientist" and then to their "research areas" if these pieces of information are in separate documents. Graph RAG, however, can navigate these relationships directly within the graph, ensuring that the retrieved information is not only accurate but also deeply contextualized within the broader knowledge network.
    This capability significantly improves the system's ability to handle nuanced queries and provide more coherent and logically structured responses.
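    A small sketch of such a multi-hop lookup, using a hand-built NetworkX graph with placeholder entities (the names and relations below are illustrative, not real facts):

    import networkx as nx

    kg = nx.DiGraph()
    kg.add_edge("DeepMind", "Scientist X", relation="lead_scientist")               # hop 1
    kg.add_edge("Scientist X", "Reinforcement Learning", relation="research_area")  # hop 2
    kg.add_edge("Scientist X", "Neuroscience", relation="research_area")

    # Hop 1: organization -> lead scientist
    scientist = next(n for n in kg.successors("DeepMind")
                     if kg["DeepMind"][n]["relation"] == "lead_scientist")
    # Hop 2: scientist -> research areas
    areas = [n for n in kg.successors(scientist)
             if kg[scientist][n]["relation"] == "research_area"]
    print(scientist, "->", areas)

    Each hop is an explicit edge lookup rather than another round of similarity search, which is what lets the answer span facts spread across separate documents.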
    Confidence Score Utilization for Refined Retrieval
    To further optimize the retrieval process and prevent the inclusion of irrelevant or low-quality information, Graph RAG utilizes confidence scores derived from the knowledge graph.
    These scores can be based on various factors, such as the strength of relationships between entities, the recency of information, or the perceived reliability of the source. By assigning confidence scores, the framework can intelligently decide when and how much external knowledge to retrieve.
    This mechanism acts as a filter, helping to prioritize high-quality, relevant information while minimizing the addition of noise.
    For instance, if a particular relationship has a low confidence score, the system might choose not to expand retrieval along that path, thereby avoiding the introduction of potentially misleading or unverified data.
    This selective expansion ensures that the LLM receives a compact and highly relevant set of facts, improving both efficiency and response accuracy by maintaining a focused and pertinent knowledge graph for each query.
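    As a minimal sketch of that filtering, assume each edge in the graph carries a numeric confidence attribute (the values below are made up); neighbors reached only through low-confidence edges are simply not expanded:

    import networkx as nx

    kg = nx.Graph()
    kg.add_edge("Google", "Sundar Pichai", relation="CEO_of", confidence=0.95)
    kg.add_edge("Google", "Quantum Widgets Inc.", relation="mentioned_with", confidence=0.30)

    def confident_neighbors(graph, node, threshold=0.6):
        # Follow only edges whose confidence clears the threshold.
        return [n for n in graph.neighbors(node)
                if graph[node][n].get("confidence", 0.0) >= threshold]

    print(confident_neighbors(kg, "Google"))  # ['Sundar Pichai']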
    How Graph RAG Works: A Step-by-Step Breakdown
    Understanding the theoretical underpinnings of Graph RAG is essential, but its true power lies in its practical implementation.
    This section will walk through the typical workflow of a Graph RAG system, illustrating each stage with conceptual code examples to provide a clearer picture of its operational mechanics.
    While the exact implementation may vary depending on the chosen graph database, LLM, and specific use case, the core principles remain consistent.
    Step 1: Query Analysis and Initial Entity Extraction
    The process begins when a user submits a query.
    The first step for the Graph RAG system is to analyze this query to identify key entities and potential relationships. This often involves Natural Language Processing (NLP) techniques such as Named Entity Recognition (NER) and dependency parsing.
    Conceptual Code Example (Python):
    import spacy
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity
    import networkx as nx

    # Load the small English spaCy model
    nlp = spacy.load("en_core_web_sm")

    # Step 1: Extract entities from the query
    def extract_entities(query):
        doc = nlp(query)
        return [(ent.text.strip(), ent.label_) for ent in doc.ents]

    query = "Who is the CEO of Google and what is their net worth?"
    extracted_entities = extract_entities(query)
    print(f"🧠 Extracted Entities: {extracted_entities}")

    Step 2: Initial Retrieval and Candidate Document Identification
    Once entities are extracted, the system performs an initial retrieval from a vast corpus of documents.
    This can be done using traditional vector search (e.g., cosine similarity on embeddings) or keyword matching. The goal here is to identify a set of candidate documents that are potentially relevant to the query.
    Conceptual Code Example (Python - simplified vector search):
    # Step 2: Retrieve candidate documents
    corpus = [
        "Sundar Pichai is the CEO of Google.",
        "Google is a multinational technology company.",
        "The net worth of many tech CEOs is in the billions.",
        "Larry Page and Sergey Brin founded Google."
    ]

    vectorizer = TfidfVectorizer()
    corpus_embeddings = vectorizer.fit_transform(corpus)

    def retrieve_candidate_documents(query, corpus, vectorizer, corpus_embeddings, top_k=2):
        query_embedding = vectorizer.transform([query])
        similarities = cosine_similarity(query_embedding, corpus_embeddings).flatten()
        top_indices = similarities.argsort()[-top_k:][::-1]
        return [corpus[i] for i in top_indices]

    candidate_docs = retrieve_candidate_documents(query, corpus, vectorizer, corpus_embeddings)
    print(f"📄 Candidate Documents: {candidate_docs}")

    Step 3: Dynamic Knowledge Graph Construction and Augmentation
    This is the core of Graph RAG.
    The extracted entities from the query and the content of the candidate documents are used to dynamically construct or augment a knowledge graph. This involves identifying new entities and relationships within the text and adding them as nodes and edges to the graph. If a base knowledge graph already exists, this step augments it; otherwise, it builds a new graph from scratch for the current query context.
    Conceptual Code Example (Python - using NetworkX for graph representation):
    # Step 3: Build or augment graph
    def build_or_augment_graph(graph, entities, documents):
        for entity, entity_type in entities:
            graph.add_node(entity, type=entity_type)
        for doc in documents:
            doc_nlp = nlp(doc)
            person = None
            org = None
            for ent in doc_nlp.ents:
                if ent.label_ == "PERSON":
                    person = ent.text.strip().strip(".")
                elif ent.label_ == "ORG":
                    org = ent.text.strip().strip(".")
            if person and org and "CEO" in doc:
                graph.add_node(person, type="PERSON")
                graph.add_node(org, type="ORG")
                graph.add_edge(person, org, relation="CEO_of")
        return graph

    # Create and populate the graph
    knowledge_graph = nx.Graph()
    knowledge_graph = build_or_augment_graph(knowledge_graph, extracted_entities, candidate_docs)
    print("🧩 Graph Nodes:", knowledge_graph.nodes(data=True))
    print("🔗 Graph Edges:", knowledge_graph.edges(data=True))

    Step 4: Graph Traversal and Contextual Information Retrieval
    With the dynamic knowledge graph in place, the system performs graph traversals starting from the query entities. It explores the relationships (edges) and connected entities (nodes) to retrieve contextually relevant information.
    This step is where the "graph" in Graph RAG truly shines, allowing for multi-hop reasoning and the discovery of implicit connections.
    Conceptual Code Example (Python - graph traversal):
    # Step 4: Graph traversal
    def traverse_graph_for_context(graph, start_entity, depth=2):
        contextual_info = set()
        visited = set()
        queue = [(start_entity, 0)]
        while queue:
            current_node, current_depth = queue.pop(0)
            if current_node in visited or current_depth > depth:
                continue
            visited.add(current_node)
            contextual_info.add(current_node)
            for neighbor in graph.neighbors(current_node):
                edge_data = graph.get_edge_data(current_node, neighbor)
                if edge_data:
                    relation = edge_data.get("relation", "unknown")
                    contextual_info.add(f"{current_node} {relation} {neighbor}")
                queue.append((neighbor, current_depth + 1))
        return list(contextual_info)

    context = traverse_graph_for_context(knowledge_graph, "Google")
    print(f"🔍 Contextual Information from Graph: {context}")

    Step 5: Confidence Score-Guided Expansion (Optional but Recommended)
    As mentioned in the features, confidence scores can be used to guide the graph traversal.
    This ensures that the expansion of retrieved information is controlled and avoids pulling in irrelevant or low-quality data. This can be integrated into Step 4 by assigning scores to edges or nodes and prioritizing high-scoring paths.
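    One possible way to wire this in, sketched under the assumption that edges carry a numeric confidence attribute (the Step 3 example does not add one, so a default of 1.0 is used), is to swap the FIFO queue from Step 4 for a max-heap keyed on confidence, so that high-scoring paths are expanded first and low-scoring ones are pruned:

    import heapq

    def traverse_with_confidence(graph, start_entity, depth=2, min_confidence=0.5):
        # Max-heap ordered by edge confidence (negated, because heapq is a min-heap).
        heap = [(-1.0, start_entity, 0)]
        visited, contextual_info = set(), set()
        while heap:
            neg_conf, node, current_depth = heapq.heappop(heap)
            if node in visited or current_depth > depth:
                continue
            visited.add(node)
            contextual_info.add(node)
            for neighbor in graph.neighbors(node):
                edge = graph.get_edge_data(node, neighbor) or {}
                conf = edge.get("confidence", 1.0)  # assume full confidence if unscored
                if conf < min_confidence:
                    continue  # prune low-confidence paths instead of expanding them
                contextual_info.add(f"{node} {edge.get('relation', 'related_to')} {neighbor}")
                heapq.heappush(heap, (-conf, neighbor, current_depth + 1))
        return list(contextual_info)

    Called as traverse_with_confidence(knowledge_graph, "Google"), this behaves like the Step 4 traversal when no scores are present and becomes selective as soon as they are.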
    Step 6: Information Synthesis and LLM Augmentation
    The retrieved contextual information from the graph, along with the original query and potentially the initial candidate documents, is then synthesized into a coherent prompt for the LLM.
    This enriched prompt provides the LLM with a much deeper and more structured understanding of the user's request.
    Conceptual Code Example (Python):
    # Step 6: Synthesize the final prompt
    def synthesize_prompt(query, contextual_info, candidate_docs):
        return "\n".join([
            f"User Query: {query}",
            "Relevant Context from Knowledge Graph:",
            "\n".join(contextual_info),
            "Additional Information from Documents:",
            "\n".join(candidate_docs)
        ])

    final_prompt = synthesize_prompt(query, context, candidate_docs)
    print(f"\n📝 Final Prompt for LLM:\n{final_prompt}")

    Step 7: LLM Response Generation
    Finally, the LLM processes the augmented prompt and generates a response.
    Because the prompt is rich with contextual and interconnected information, the LLM is better equipped to provide accurate, comprehensive, and coherent answers.
    Conceptual Code Example (Python - using a placeholder LLM call):
    # Step 7: Simulated LLM response
    def generate_llm_response(prompt):
        if "Sundar" in prompt and "CEO of Google" in prompt:
            return "Sundar Pichai is the CEO of Google. He oversees the company and has a significant net worth."
        return "I need more information to answer that accurately."

    llm_response = generate_llm_response(final_prompt)
    print(f"\n💬 LLM Response: {llm_response}")

    # Visualize the constructed knowledge graph
    import matplotlib.pyplot as plt

    plt.figure(figsize=(4, 3))
    pos = nx.spring_layout(knowledge_graph)
    nx.draw(knowledge_graph, pos, with_labels=True, node_color='skyblue',
            node_size=2000, font_size=12, font_weight='bold')
    edge_labels = nx.get_edge_attributes(knowledge_graph, 'relation')
    nx.draw_networkx_edge_labels(knowledge_graph, pos, edge_labels=edge_labels)
    plt.title("Graph RAG: Knowledge Graph")
    plt.show()

    This step-by-step process, particularly the dynamic graph construction and traversal, allows Graph RAG to move beyond simple keyword or semantic similarity, enabling a more profound understanding of information and leading to superior response generation.
    The integration of graph structures provides a powerful mechanism for contextualizing information, which is a critical factor in achieving high-quality RAG outputs.
    Practical Applications and Use Cases of Graph RAG
    Graph RAG is not just a theoretical concept; its ability to understand and leverage relationships within data opens up a myriad of practical applications across various industries. By providing LLMs with a richer, more interconnected context, Graph RAG can significantly enhance performance in scenarios where traditional RAG might fall short. Here are some compelling use cases:
    1. Enhanced Enterprise Knowledge Management
    Large organizations often struggle with vast, disparate knowledge bases, including internal documents, reports, wikis, and customer support logs. Traditional search and RAG systems can retrieve individual documents, but they often fail to connect related information across different silos.
    Graph RAG can build a dynamic knowledge graph from these diverse sources, linking employees to projects, projects to documents, documents to concepts, and concepts to external regulations or industry standards. This allows for:
    Intelligent Q&A for Employees: Employees can ask complex questions like "What are the compliance requirements for Project X, and which team members are experts in those areas?" Graph RAG can traverse the graph to identify relevant compliance documents, link them to specific regulations, and then find the employees associated with those regulations or Project X.
    Automated Report Generation: By understanding the relationships between data points, Graph RAG can gather all necessary information for comprehensive reports, such as project summaries, risk assessments, or market analyses, significantly reducing manual effort.
    Onboarding and Training: New hires can quickly get up to speed by querying the knowledge base and receiving contextually rich answers that explain not just what something is, but also how it relates to other internal processes, tools, or teams.
    2. Advanced Legal and Regulatory Compliance
    The legal and regulatory domains are inherently complex, characterized by vast amounts of interconnected documents, precedents, and regulations. Understanding the relationships between different legal clauses, case laws, and regulatory frameworks is critical. Graph RAG can be a game-changer here:
    Contract Analysis: Lawyers can use Graph RAG to analyze contracts, identify key clauses, obligations, and risks, and link them to relevant legal precedents or regulatory acts. A query like "Show me all clauses in this contract related to data privacy and their implications under GDPR" can be answered comprehensively by traversing the graph of legal concepts.
    Regulatory Impact Assessment: When new regulations are introduced, Graph RAG can quickly identify all affected internal policies, business processes, and even specific projects, providing a holistic view of the compliance impact.
    Litigation Support: By mapping relationships between entities in case documents (e.g., parties, dates, events, claims, evidence), Graph RAG can help legal teams quickly identify connections, uncover hidden patterns, and build stronger arguments.
    3. Scientific Research and Drug Discovery
    Scientific literature is growing exponentially, making it challenging for researchers to keep up with new discoveries and their interconnections. Graph RAG can accelerate research by creating dynamic knowledge graphs from scientific papers, patents, and clinical trial data:
    Hypothesis Generation: Researchers can query the system about potential drug targets, disease pathways, or gene interactions. Graph RAG can connect information about compounds, proteins, diseases, and research findings to suggest novel hypotheses or identify gaps in current knowledge.
    Literature Review: Instead of sifting through thousands of papers, researchers can ask questions like "What are the known interactions between Protein A and Disease B, and which research groups are actively working on this?" The system can then provide a structured summary of relevant findings and researchers.
    Clinical Trial Analysis: Graph RAG can link patient data, treatment protocols, and outcomes to identify correlations and insights that might not be apparent through traditional statistical analysis, aiding in drug development and personalized medicine.
    4. Intelligent Customer Support and Chatbots
    While many chatbots exist, their effectiveness is often limited by their inability to handle complex, multi-turn conversations that require deep contextual understanding. Graph RAG can power next-generation customer support systems:
    Complex Query Resolution: Customers often ask questions that require combining information from multiple sources (e.g., product manuals, FAQs, past support tickets, user forums). A query like "My smart home device isn't connecting to Wi-Fi after the latest firmware update; what are the troubleshooting steps and known compatibility issues with my router model?" can be resolved by a Graph RAG-powered chatbot that understands the relationships between devices, firmware versions, router models, and troubleshooting procedures.
    Personalized Recommendations: By understanding a customer's past interactions, preferences, and product usage (represented in a graph), the system can provide highly personalized product recommendations or proactive support.
    Agent Assist: Customer service agents can receive real-time, contextually relevant information and suggestions from a Graph RAG system, significantly improving resolution times and customer satisfaction.
    These use cases highlight Graph RAG's potential to transform how we interact with information, moving beyond simple retrieval to true contextual understanding and intelligent reasoning. By focusing on the relationships within data, Graph RAG unlocks new levels of accuracy, efficiency, and insight in AI-powered applications.
    Conclusion
    Graph RAG represents a significant evolution in the field of Retrieval-Augmented Generation, moving beyond the limitations of traditional vector-based retrieval to harness the power of interconnected knowledge. By dynamically constructing and leveraging knowledge graphs, Graph RAG enables Large Language Models to access and synthesize information with unprecedented contextual depth and accuracy.
    This approach not only enhances the factual grounding of LLM responses but also unlocks the potential for more sophisticated reasoning, multi-hop question answering, and a deeper understanding of complex relationships within data.
    The practical applications of Graph RAG are vast and transformative, spanning enterprise knowledge management, legal and regulatory compliance, scientific research, and intelligent customer support. In each of these domains, the ability to navigate and understand the intricate web of information through a graph structure leads to more precise, comprehensive, and reliable AI-powered solutions. As data continues to grow in complexity and interconnectedness, Graph RAG offers a robust framework for building intelligent systems that can truly comprehend and utilize the rich tapestry of human knowledge.
    While the implementation of Graph RAG may involve overcoming challenges related to graph construction, entity extraction, and efficient traversal, the benefits in terms of enhanced LLM performance and the ability to tackle real-world problems with greater efficacy are undeniable.
    As research and development in this area continue, Graph RAG is poised to become an indispensable component in the architecture of advanced AI systems, paving the way for a future where AI can reason and respond with a level of intelligence that truly mirrors human understanding.
    Frequently Asked Questions
    1. What is the primary advantage of Graph RAG over traditional RAG?
    The primary advantage of Graph RAG is its ability to understand and leverage the relationships between entities and concepts within a knowledge graph. Unlike traditional RAG, which often relies on semantic similarity in vector space, Graph RAG can perform multi-hop reasoning and retrieve contextually rich information by traversing explicit connections, leading to more accurate and comprehensive responses.
    2. How does Graph RAG handle new information or evolving knowledge?
    Graph RAG employs dynamic knowledge graph construction. This means it can build or augment the knowledge graph in real-time based on the entities identified in the user query and retrieved documents. This on-the-fly capability allows the system to adapt to new information and evolving contexts without requiring constant re-indexing or manual graph updates.
    3. Is Graph RAG suitable for all types of data?
    Graph RAG is particularly effective for data where relationships between entities are crucial for understanding and answering queries. This includes structured, semi-structured, and unstructured text that can be transformed into a graph representation. While it can work with various data types, its benefits are most pronounced in domains rich with interconnected information, such as legal documents, scientific literature, or enterprise knowledge bases.
    4. What are the main components required to build a Graph RAG system?
    Key components typically include:
    LLM (Large Language Model): For generating responses.
    Graph Database (or Graph Representation Library): To store and manage the knowledge graph (e.g., Neo4j, Amazon Neptune, NetworkX).
    Information Extraction Module: For Named Entity Recognition (NER) and Relation Extraction (RE) to populate the graph.
    Retrieval Module: To perform initial document retrieval and then graph traversal.
    Prompt Engineering Module: To synthesize the retrieved graph context into a coherent prompt for the LLM.
    5. What are the potential challenges in implementing Graph RAG?
    Challenges can include:
    Complexity of Graph Construction: Accurately extracting entities and relations from unstructured text can be challenging.
    Scalability: Managing and traversing very large knowledge graphs efficiently can be computationally intensive.
    Data Quality: The quality of the generated graph heavily depends on the quality of the input data and the extraction models.
    Integration: Seamlessly integrating various components (LLM, graph database, NLP tools) can require significant engineering effort.
    6. Can Graph RAG be combined with other RAG techniques?
    Yes, Graph RAG can be combined with other RAG techniques. For instance, initial retrieval can still leverage vector search to narrow down the relevant document set, and then Graph RAG can be applied to these candidate documents to build a more precise contextual graph. This hybrid approach can offer the best of both worlds: the broad coverage of vector search and the deep contextual understanding of graph-based retrieval.
    7. How does confidence scoring work in Graph RAG?
    Confidence scoring in Graph RAG involves assigning scores to nodes and edges within the dynamically constructed knowledge graph. These scores can reflect the strength of a relationship, the recency of information, or the reliability of its source. The system uses these scores to prioritize paths during graph traversal, ensuring that only the most relevant and high-quality information is retrieved and used to augment the LLM prompt, thereby minimizing irrelevant additions.
    References
    Graph RAG: Dynamic Knowledge Graph Construction for Enhanced Retrieval
    Note: This is a conceptual article based on the principles of Graph RAG. Specific research papers on "Graph RAG" as a unified concept are emerging, but the underlying ideas draw from knowledge graphs, RAG, and dynamic graph construction.
    Original Jupyter Notebook (for code examples and base content)
    Retrieval-Augmented Generation (RAG): Lewis, P., et al. (2020). Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. arXiv preprint arXiv:2005.11401. https://arxiv.org/abs/2005.11401
    Knowledge Graphs: Ehrlinger, L., & Wöß, W. (2016). Knowledge Graphs: An Introduction to Their Creation and Usage. In Semantic Web Challenges (pp. 1-17). Springer, Cham. https://link.springer.com/chapter/10.1007/978-3-319-38930-1_1
    Named Entity Recognition (NER) and Relation Extraction (RE): Nadeau, D., & Sekine, S. (2007). A survey of named entity recognition and classification. Lingvisticae Investigationes, 30(1), 3-26. https://www.researchgate.net/publication/220050800_A_survey_of_named_entity_recognition_and_classification
    NetworkX (Python library for graph manipulation): https://networkx.org/
    spaCy (Python library for NLP): https://spacy.io/
    scikit-learn (Python library for machine learning): https://scikit-learn.org/
  3. by: Abhishek Prakash
    Thu, 13 Nov 2025 04:29:03 GMT

    Here is the news. It's FOSS News (news.itsfoss.com) doesn't exist anymore, at least not as a separate entity. All news articles are now located under the main website: https://itsfoss.com/news/
    I merged the two portals into one. Now, you just have to log into one portal to enjoy your membership benefits. I hope it simplifies things for you, especially if you are a Plus member.
    Let's see what else you get in this edition of FOSS Weekly:
    A new ODF document standard release.
    Open source alternative to Help Scout.
    YouTube clamping down on tech YouTubers.
    Fixing thumbnail issues in Fedora 43.
    Ubuntu's Rust transition hitting yet another hurdle.
    And other Linux news, tips, and, of course, memes!

    This edition of FOSS Weekly is supported by Internxt.
    SPONSORED: You cannot ignore the importance of cloud storage these days, especially when it is encrypted. Internxt is offering 1 TB of lifetime, encrypted cloud storage for a single payment. Make it part of your 3-2-1 backup strategy and use it for dumping data. At least, that's what I use it for.
    Get Internxt Lifetime Cloud Storage

    📰 Linux and Open Source News
    A new Rust-related problem has cropped up in the land of Ubuntu.
    ODF 1.4 is here as the next evolution for the open document standard.
    You can now play classic D3D7 games on Linux with this new project.
    YouTube recently deleted some Windows 11 bypass tutorials with some absurd claims.
    Kaspersky antivirus software is now available for Linux users. Personally, I don't use any such software on Linux.

    Big Tech being Big Tech. A creator claimed that his videos about bypassing Windows 11's mandatory online account were removed by YouTube.
    YouTube Goes Bonkers, Removes Windows 11 Bypass Tutorials, Claims ‘Risk of Physical Harm’: When will these Big Tech platforms learn? (It's FOSS, Sourav Rudra)

    🧠 What We’re Thinking About
    Could GNOME Office be a thing? Roland has some convincing points:
    It’s Time to Bring Back GNOME Office (Hope You Remember It): Those who used GNOME 2 in the 2000s would remember the now forgotten GNOME Office. I think it’s time to revive that project. (It's FOSS, Roland Taylor)

    On a side note, I found out that Flathub is ranking on Google for NSFW keywords.
    What a Shame! FlatHub is Ranking on Google for Po*nHub Downloads: And it’s not Google’s fault this time. (It's FOSS, Abhishek Prakash)

    🧮 Linux Tips, Tutorials, and Learnings
    You can fix that annoying issue of GNOME Files not showing image thumbnails on Fedora, btw.
    Fixing Image Thumbnails Not Showing Up in GNOME Files on Fedora Linux: Tiny problem but not good for the image of Fedora Linux, pun intended. (It's FOSS, Abhishek Prakash)

    Theena suggests some ways to reclaim your data privacy. Switching to a private email service like Proton is one of the recommendations.
    If you are absolutely new to the Linux commands, we have a hands-on series to help you out.
    Linux Command Tutorials for Absolute Beginners: Never used Linux commands before? No worries. This tutorial series is for absolute beginners to the Linux terminal. (It's FOSS)

    👷 AI, Homelab and Hardware Corner
    Ownership of digital content is an illusion, until you take matters into your own hands. Our self-hosting starter pack should be a good starting point.
    The Self-Hosting Starter Pack: 5 Simple Tools I Recommend To Get Started With Your Homelab - Self-hosting isn’t rocket science; if I can do it, so can you! (It's FOSS, Theena Kumaragurunathan)

    🛍️ Linux eBook bundle
    This curated library (partner link) of courses includes Supercomputers for Linux SysAdmins, CompTIA Linux+ Certification Companion, Using and Administering Linux: Volumes 1–2, and more. Plus, your purchase supports the Room to Read initiative!
    Explore the Humble offer here

    ✨ Project Highlights
    Don't let its name fool you. Calcurse is a powerhouse of a tool that can be your go-to for any calendar management needs (like a boon, almost).
    Command Your Calendar: Inside the Minimalist Linux Productivity Tool Calcurse - A classic way to stay organized in the Linux terminal with a classic CLI tool. (It's FOSS, Roland Taylor)

    Help Scout is known for abrupt pricing changes; why not switch to a platform that actually cares?
    Tired of Help Scout Pulling the Rug from Under You? Try This Free, Open Source Alternative: Discover how FreeScout lets you run your own help desk without vendor lock-in or surprise price hikes. (It's FOSS, Sourav Rudra)

    📽️ Videos I Am Creating for You
    The latest video shows my recommendations for Kitty terminal configuration changes.
    Subscribe to It's FOSS YouTube Channel

    Linux is the most used operating system in the world, but on servers. Linux on the desktop is often ignored. That's why It's FOSS made it a mission to write helpful tutorials and guides that help people use Linux on their personal computers.
    We do it all for free. No venture capitalist funds us. But you know who does? Readers like you. Yes, we are an independent, reader supported publication helping Linux users worldwide with timely news coverage, in-depth guides and tutorials.
    If you believe in our work, please support us by getting a Plus membership. It costs just $3 a month or $99 for a lifetime subscription.
    Join It's FOSS Plus

    💡 Quick Handy Tip
    In the Konsole terminal emulator, you can use the right-click context menu to open any folder with a specific tool. For example, if you are inside a directory, right-click and select the "Open Folder With" option.
    From the list, select an application. So, for instance, if Dolphin is selected, the location will be opened in the file manager. If Kate is selected, that location is opened in the editor.
    Other than that, if you enable the "Underline Files" option in Configure Konsole → Profiles → Edit Profile → Mouse → Miscellaneous, you can even right-click and open files in GUI tools right from the terminal.
    🎋 Fun in the FOSSverse
    Can you get all the answers to this Linux distro logo quiz?
    Guess the Distro from its Logo: There is a logo and four distro names. Guess which one it belongs to. It’s that simple. (It's FOSS, Abhishek Prakash)

    🤣 Meme of the Week: Such words can hurt the soul, you know. 😵
    🗓️ Tech Trivia: On November 9, 2004, Mozilla Firefox 1.0 was released, introducing a faster, safer web-browsing experience with features like tabbed browsing and popup blocking, marking a major challenge to Microsoft’s Internet Explorer dominance.
    🧑‍🤝‍🧑 From the Community: One of the developers of antiX Linux has announced that the first beta release of antiX 25 is now live!
    antiX 25 Beta 1 Available for Public Testing: antiX-25-full-beta1 available for public testing, November 5, 2025, by anticapitalista. Here is the first beta iso of antiX-25 (64-bit). Bullet point notes for now:
    based on Debian 13 ‘trixie’
    4 modern systemd-free init systems – runit (default), s6-rc, s6-66 and dinit
    new default look
    usual ‘antiX magic’
    you should be able to boot live in the non-default init and it should then become the default after install.
    Please note that user intervention will be required more than previous versions o… (It's FOSS Community, ProwlerGr)

    ❤️ With love
    Please share it with your Linux-using friends and encourage them to subscribe (hint: it's here).
    Share the articles in Linux Subreddits and community forums.
    Follow us on Google News and stay updated in your News feed.
    Opt for It's FOSS Plus membership and support us 🙏
    Enjoy FOSS 😄
  4. by: Sourav Rudra
    Wed, 12 Nov 2025 17:12:36 GMT

    The OpenSearch Software Foundation is a vendor-neutral organization under the Linux Foundation that hosts the OpenSearch Project. It recently appointed a new Executive Director, and the project itself has already seen over 1 billion software downloads since launch.
    If you didn't know, OpenSearch focuses on search, analytics, observability, and vector database capabilities.
    What's Happening: During this year's KubeCon + CloudNativeCon North America conference, the foundation announced that IBM has joined as a Premier Member. This move comes at a time when enterprises are increasingly adopting retrieval-augmented generation (RAG) for AI applications.
    The Premier membership costs $150,000 annually. With it, IBM joins existing Premier Members, including AWS, SAP, and Uber.
    IBM currently uses OpenSearch in production through DataStax, its subsidiary. The company integrated JVector with OpenSearch for high-performance vector search at billion-vector scale.
    During the announcement, Ed Anuff, the VP of Data and AI Platforms Strategy at IBM, added that:
    What to Expect: IBM will contribute enterprise-grade enhancements to OpenSearch's security and observability features. The company plans to share high-availability patterns tested through IBM Cloud deployments.
    The focus areas include vector search performance improvements and multimodal document ingestion. IBM also aims to advance the developer experience for building AI agents.
    Plus, the company is on track to announce a new open source project featuring OpenSearch at the OpenRAG Summit on November 13.
    Reflecting on the partnership's significance, Bianca Lewis remarked:
    Suggested Read 📖
    OpenSearch Foundation Strengthens Leadership with New Executive Director: Bianca Lewis becomes executive director of the OpenSearch Foundation. (It's FOSS, Sourav Rudra)
  5. by: Sourav Rudra
    Wed, 12 Nov 2025 15:11:51 GMT

    The Linux ecosystem is facing increasing pressure from threat actors who are growing more clever by the day, threatening critical infrastructure worldwide. Servers powering essential services, industrial control systems, and enterprise networks all rely on Linux, and these attackers know it.
    What was once considered a relatively safe ecosystem is now a lucrative target. 🥲
    This brings us to Kaspersky, the Russian cybersecurity firm with a reputation. The company was banned from selling its antivirus software and cybersecurity products in the U.S. back in July 2024.
    But for users outside the U.S., Kaspersky just announced something interesting. They are bringing antivirus protection to home Linux users. Though it remains to be seen whether this addresses genuine security needs or whether it's just security theater for worried penguins.
    🚧 This piece of software is not FOSS. We covered it because it is available for Linux!

    Kaspersky for Linux: What Does it Offer?
    Kaspersky has expanded its consumer security lineup to include Linux. This marks the first time their home user products officially support the platform. The company adapted their existing business security solution for home users. Support covers major 64-bit distributions, including Debian, Ubuntu, Fedora, and RED OS.
    Depending on the plan you opt for, the feature set includes real-time monitoring of files, folders, and applications to detect and eliminate malware. Behavioral analysis detects malware on the device for proactive defense.
    Removable media like USB drives and external hard drives get scanned automatically upon connection. This prevents the spread of viruses across devices and networks.
    Anti-phishing alerts users when they attempt to follow phishing links in emails and on websites. Online payment protection verifies the security of bank websites and online stores before financial transactions.
    Anti-cryptojacking prevents unauthorized crypto mining on devices to protect system performance, and AI-powered scanning blocks infected files, folders, and applications upon detecting viruses, ransomware trojans, password stealers, and other malware.
    There is one important thing to consider, though: Kaspersky for Linux isn't GDPR-ready, so keep this in mind if you are an EU-based user concerned about data protection compliance.
    Get Kaspersky for Linux
    An active paid subscription is required to download and use Kaspersky for Linux. A 30-day free trial is available for users who want to test before committing to a paid plan. Both DEB and RPM packages are provided for easy installation.
    The official installation guide contains detailed setup instructions.
    Kaspersky for Linux
    Via: Phoronix
  6. by: Sourav Rudra
    Wed, 12 Nov 2025 13:29:24 GMT

    Ubuntu's move to Rust-based system utilities has hit some bumps. Earlier, a bug in the Rust-based date command broke automatic updates. The command returned the current time instead of file modification timestamps, causing Ubuntu 25.10 systems to stop automatically checking for software updates.
    That issue was quickly fixed, but now, two security vulnerabilities have been found in sudo-rs.
    Better Now than Later
    The first vulnerability involves password exposure during timeouts. When users type a password but don't press enter, the timeout causes those keystrokes to replay onto the console. This could reveal partial passwords in shell history or on screen.
    The second issue affects timestamp authentication. When the Defaults targetpw or Defaults rootpw options are enabled, sudo-rs recorded the wrong user ID in timestamps. This allowed authentication to be bypassed by reusing cached credentials even when policy required a different password.
    Patches for both issues have been released in sudo-rs 0.2.10. Ubuntu is set to push the fixes through a Stable Release Update (SRU).
    These bugs being caught in Ubuntu 25.10 is actually a good sign. The interim release serves as a testing ground before Ubuntu 26.04 LTS arrives in April 2026. Finding critical security flaws now allows developers ample time to address them.
    Here's the Fix!
    At the time of writing, the updated sudo-rs package had not yet arrived in the Ubuntu 25.10 repositories. But it should be available soon.
    Once the update is live, you can get the fix using the graphical Software Updater tool by launching it from your application menu and installing any available security updates.
    sudo-rs' upgrade process on Ubuntu 25.10.
    Alternatively, you can use the terminal. Run these commands one after the other to get the patch:
    sudo-rs apt update
    sudo-rs apt upgrade

    PS: Using sudo instead of sudo-rs also works the same.
    Via: Phoronix
    Suggested Read 📖
    sudo vs sudo-rs: What You Need to Know - sudo-rs is poised to take over. Here’s what you should know about sudo-rs as a sudo user. (It's FOSS, Abhishek Prakash)
  7. by: Ani
    Wed, 12 Nov 2025 11:14:36 +0000

    The only constant in life is change.

    About me
    I’m Chandni, currently working as a UX Designer at Neomore, where I create user experiences for SAP applications. My work begins with understanding workflows through user research and interviews. From there, I create wireframes, prototypes, and user flows, collaborating closely with consultants, developers, and stakeholders to ensure that every design is both technically feasible and genuinely user-friendly.
    Starting my career in the IT field

    My career began as a software developer, which gave me a strong foundation in how digital products are built. Over time, I realized that what truly fascinated me was the human side of technology, the way people interact with systems, and how design can make that interaction possible. Being naturally people-oriented, I transitioned into UX design to focus on understanding users’ needs and creating experiences that make complex systems user-friendly and enjoyable. 

    Chandni Sharma, Consultant, UX & Application Innovation, Neomore

    My Background

    I earned my Bachelor of Technology degree in India. Later, I moved to Finland when my husband was relocated here, and I quickly fell in love with the country. What started as a visit soon became a long-term decision to build both my life and career here.  
    When I first arrived, it was a challenging time to find a job. It was right after Nokia’s major economic decline, and many experienced professionals had entered the job market. Each time I applied, I often got a reply that the position had been filled by someone who had just left Nokia. Being young and less experienced, it was difficult to compete.  
    Instead of giving up, I decided to focus on learning. I pursued a bachelor’s degree in user experience at Haaga-Helia and later completed my master’s in computer science at Aalto University, specializing in Service Design and UX. These studies not only deepened my technical knowledge but also boosted my confidence. They helped me successfully transition from software development to becoming a UX designer and researcher, an expert in creating meaningful user experiences.
    My path to Neomore

    When I was studying, I had heard of SAP but didn’t really understand what the field was about. Later, while searching for jobs, I came across openings for UX roles in SAP and became curious to know how user experience fits into SAP. I decided to apply, and during the interviews, the team explained the work in detail. I realized it would be both a challenging and rewarding learning journey. That’s how I joined Neomore. I had some prior experience from another company, and now, this December, I’ll be completing three years at Neomore.
    Working at Neomore
    What truly motivates me to go to the office every day is the people and the culture at Neomore. The supportive and inspiring environment they’ve built makes a huge difference, which keeps me motivated no matter how challenging a project or task might be. I have to admit, it’s the people and the culture that make Neomore such a great place to work.
    Enjoying my work

    The best part of my job is the constant learning that comes with working in the SAP industry. Every day, I gain new insights into different processes and how our clients operate, areas I never imagined I’d explore. For example, I’ve learned how manufacturing works, what kinds of machines are used in woodworking, and the challenges people face in their daily routines. Understanding these real-world contexts and discovering how technology can help them is both motivating and exciting. This continuous learning keeps me energized, no matter how challenging the work gets.
    The necessary skills in the IT field
    Problem-solving and curiosity are some of the most important skills to have. In my work, curiosity drives me to ask questions and explore different perspectives. When I listen to people, my curious nature helps me go deeper into their stories and uncover hidden insights. I’m not afraid to ask questions—though I’m always mindful of how and when to ask them. This openness allows me to gather valuable answers, identify the real problems, and map out effective solutions. Ultimately, curiosity and the courage to ask are key to meaningful problem-solving in any field.
    Other important skills to have are good communication skills, being empathetic towards users, and having knowledge of software development. Being skilled in technology, I can bridge the technology and human needs into something that can really make a difference.
    Overcoming challenges
    To overcome challenges, I’ve learned that having patience with oneself is essential. You need to give time to keep yourself updated on technologies so that you can level up with developers, stakeholders, and users.
    To keep yourself up-to-date, you must embrace continuous learning to help yourself bridge the gaps, and I’m grateful that Neomore strongly supports professional growth. They regularly encourage us to learn and discuss ways to develop further. For me, patience and lifelong learning are the keys to overcoming challenges in this field.
    The key to solving problems
    Whenever I get stuck on a problem and cannot find a solution, I pause for a moment instead of forcing myself to continue. At the office, we have a pool table, so I often take a short break to play a quick game, sometimes alone, sometimes with a colleague. That brief change of focus helps clear my mind. It is a simple routine, but it really helps me get back into the right state of mind to solve the problems effectively.
    Sources of energy
    I get a lot of energy by being surrounded by people, either friends or family. This has been a good source of energy for me. After becoming a mother, my children became my greatest source of energy. Playing with my kids, listening to their stories, and doing whatever they want to do brings my mind to balance and provides me with a lot of energy to continue and thrive in any situation I am in.
    Always Start with Why
    Start with Why, written by Simon Sinek, has had a profound impact on my work and, more broadly, on my life. For example, when I talk to a client, I listen to their needs and always ask why they want to have what they want. This allows me to go deeper into their needs, and I get a clearer idea of what they are requesting, which helps me to help them get their solutions.
    About the impact of AI
    I strongly believe that AI is a powerful tool designed to help us. There’s a saying that the only constant in life is change, and AI is a part of that change. Instead of fearing it, we should learn, understand, and use it to our advantage. Every technology has both positive and negative sides, and AI is no different. The key is to understand both aspects and use them responsibly. Personally, I actively explore and learn from different AI tools, finding ways they can support my work and growth. While some worry that AI might take jobs or have negative effects, I see it as an opportunity to evolve and work smarter.

    Note: Between the time of writing this blog post and publishing it, Chandni Sharma’s employment at Neomore ended. Neomore, Chandni and Women in Tech decided to publish this blog post regardless since Chandni will always be a role model to our communities.
    The post Role Model Blog: Chandni Sharma, Neomore first appeared on Women in Tech Finland.
  8. by: Theena Kumaragurunathan
    Wed, 12 Nov 2025 07:21:41 GMT

    In my last column, Ownership is an illusion, unless you self-host, I encouraged readers to go down the self-hosting path. My thesis was simple: ownership of digital assets (movies, music, games, books, software) is an illusion, and that the only way to move away from this make-believe was to embrace self-hosting.
    For people like me, non-programmer types, this is easier said than done: Free and Open Source Software (FOSS) can seem intimidating because it often (not always) asks you to embrace granular control over convenience and ease of use.
    The author's server, a repurposed 14-year-old ThinkPad, ©Theena Kumaragurunathan, 2025

    When non-tech people see my server (an old ThinkPad T420) nestled in my bookshelf, running ‘bpytop’ of all things, they assume that I am engaged in some hackery: ‘What is this Matrix shit?’, a friend once wondered. When I told him that it was nothing more than a file server for my media (movies, music and books), and then showed him my Jellyfin instance running inside my browser, I could see he was having a lightbulb moment:
    ‘Can you do this for me?’
    Sure, I told him, but I offered him a better choice: ‘I’ll show you what you need to know in order to do this yourself, and then we will create a media server for you together.’ Give a man a fish and you feed him for a day. Teach a man to fish and you feed him for a lifetime, right?
    That journey with my friend took us from basic Linux commands to installing Plex/Jellyfin, which is well beyond the scope of this article (let us know if that is something you are interested in, non-techie/programmer readers). Instead, in this column, I will offer an abridged version.
    Ask yourself why you need this
    I had a clear motivation to go down this path during the pandemic: I wanted to backup my media collection of music and run it off Plex Server. My friend wanted to self-host his movies so that he didn’t have to wade through his hard-disks when selecting a movie to watch.
    What is your motivation?
    What you need to get started on the homelab bandwagon
    An old computer or laptop. This needs to be in working order. Mine is an old ThinkPad T420 (which is 14 years old, and I am its 3rd owner). Anything from the last decade and a half ought to do. You can also get a Raspberry Pi. I would also prefer an older machine with an ethernet port; connection stability is better when your server has a wired connection to your network in my experience.
    Pick an operating system: I chose Debian server. You can host many of the applications listed below on a Windows install too, but your mileage may vary. If you want an even easier way, try the YunoHost Linux distro.
    The author's self-hosting stack, ©Theena Kumaragurunathan, 2025

    Start your homelab by self-hosting these applications
    You don't have to deploy all the recommendations. Think about which one would fit your requirements the most. Select it and then deploy it. Once that is successful, try the next one. One project at a time.
    📋 I am not going to include installation and set-up instructions. Those things may differ based on the choice of your operating system as well as hardware. These are just the recommendations to put you on the right track.

    Jellyfin: Your own Netflix
    Enjoying local movies on TV with Jellyfin

    Jellyfin is your home theater. It organizes movies and shows, fetches artwork, and streams to your TV, browser, and phone. I chose the Jellyfin media server because setup is simple. On Debian or Ubuntu, you can use the official guide, or run it with Docker and point it at your media folders. It has no subscriptions and no tracking.
    💡 Keep your server on wired ethernet for stable playback, and enable hardware transcoding only if your CPU or GPU supports it.

    Kavita: Your own Kindle library
    The author's instance of Kavita running on his local server, ©Theena Kumaragurunathan, 2025

    Kavita is a self-hosted library for books, PDFs, comics, and manga. It has a fast reader, rich metadata, OPDS support, and good user management. I use it to keep my EPUBs and essays in one place with clean reading progress across devices.
    💡 Sort files into clear folders, let Kavita watch those folders, and enable OPDS if you read on third-party apps.

    Nextcloud: Your own Google Drive
    Nextcloud is your personal file cloud. Sync your files, share links, and extend it with Notes, Calendar, and Contacts. It feels like a private Dropbox that runs on your hardware. The server has regular releases and clear upgrade docs. If you are new, use the web installer or Docker and start with Files before adding apps.
    💡Keep it simple. Install Files first, set up the desktop client, and only add one or two apps after you are comfortable.Immich: Your own Google Photos
    Immich is a private photo and video backup with mobile apps on Android and iOS. It does face recognition, search, albums, and multi user support. It is fast and designed for large libraries. Installation is straightforward with Docker Compose. Begin with the official site, then the server and apps.
    💡Turn on automatic mobile backup, keep originals on the server, and use albums for curation.Navidrome: Your own Spotify
    Navidrome turns your music collection into a streaming service. It indexes quickly, supports Subsonic clients, and runs well on modest hardware. You can use a single binary or Docker and attach your music folder.
    💡Install ffmpeg for transcoding, clean your tags for better library browsing, and test a few clients until one fits your flow.Putting It Together
    A practical starter map looks like this. Jellyfin for movies and shows. Kavita for books and PDFs. Nextcloud for files and sharing. Immich for your photos. Navidrome for music. Run all five on Debian server or YunoHost or on Docker if you prefer containers. Keep your server on wired Ethernet. Back up the data folders in your home network.
    Start with one service, get comfortable, then add another. The point is not perfection. It is owning your library and making it available to the people you care about, without asking permission from a platform that can lock you out at a whim.
    Enjoy your home lab 🏠🥼
  9. by: Sourav Rudra
    Tue, 11 Nov 2025 17:23:36 GMT

    D7VK is a new Vulkan-based translation layer for Direct3D 7. It relies on DXVK’s Direct3D 9 backend and works with Wine on Linux. The project is open source and actively maintained.
    The developer behind it is WinterSnowfall, who has also worked on D8VK between 2023 and 2024. That project has since been merged into the larger DXVK project that's extensively used by Linux users.
    You have to understand that D7VK is not meant to run every Direct3D 7 game. Titles that mix D3D7 with older DirectDraw or GDI calls may fail to launch or show graphical glitches. So, compatibility is experimental and limited.
    It works by translating Direct3D 7 calls to Direct3D 9 through DXVK, allowing Vulkan-based 3D application rendering on Linux. Sadly, there is no official list of supported games yet.
    Some games work well, others have issues. Missing textures, crashes, and black screens are common. The issues page on the project's GitHub repo shows which games are behaving poorly. It is a good way to see what currently works.
    📋PCGamingWiki's list of Direct3D 2-7 games is also a handy resource to have if you want to test a specific Direct3D 7 game.What’s nice is how the developer sets expectations right from the start. They are upfront about the experimental nature of the project. This clarity makes it easier to test games without getting disappointed.
    For fans of late 90s and early 2000s games, D7VK could be handy. It won’t fix everything, but it opens the door to running older Direct3D 7 games on Linux.
    Want to Check it Out?
    The D7VK GitHub repository has the source code. You can manually compile it and place it in your Wine prefix directory to try it out. D7VK supports a HUD overlay and frame rate limiting through DXVK.
    These features will help you track performance and debug graphical issues.
    D7VK (GitHub)Suggested Read 📖
    Is Linux Ready For Mainstream Gaming In 2025?Linux is quietly gaining ground on Windows in the gaming space. But how well does it actually perform? Here’s what I experienced.It's FOSSSourav Rudra
  10. by: Umair Khurshid
    Tue, 11 Nov 2025 20:03:47 +0530

    Networking problems rarely announce themselves clearly. A deployment fails, a pod cannot reach its database, or a service responds intermittently. The logs look clean, yet something feels wrong. Most engineers eventually learn one painful truth: when everything else seems fine, it is usually the network.
    From misrouted traffic to invisible firewalls, let me walk you through the most frequent networking issues that DevOps engineers encounter in Linux environments. I also explain how to investigate, diagnose, and fix each class of problem using real commands and reasoning.
All this comes from the experience I have gained after years of troubleshooting production systems. The same experience also yielded this Linux networking microcourse that you should definitely check out.
    Linux Networking at ScaleMaster advanced networking on Linux — from policy routing to encrypted overlays.Linux HandbookUmair KhurshidIt’s Almost Always the Network
When an application behaves unpredictably, the first instinct is to look at the code. Developers dig through logs, restart containers, or roll back deployments. In many cases, though, the application is not the culprit; it's the network.
Early in my career, I used to dread these moments, as application logs would show nothing but retries and timeouts. The developers would swear nothing changed, and the operations team would swear they touched nothing. Yet packets were vanishing into the void, and that is how I began to take networking seriously: not because I wanted to, but because I had to.
    A good troubleshooting approach begins by proving that connectivity works at every layer. Start simple:
ping -c 4 8.8.8.8
ping -c 4 example.com
If the first command succeeds but the second fails, DNS is the culprit. If both fail, it is a routing or firewall issue. This baseline test should always come before looking into application-level logs.
    Then, verify whether the local host can reach its gateway and whether packets are returning:
ip route show
traceroute 8.8.8.8
The Routing Rabbit Hole
    Routing problems are deceptively subtle as traffic flows one way but not the other, or only some destinations are reachable. The root cause often hides in Linux’s routing tables or in policies added by container frameworks.
    Start by displaying the active routes:
ip route
This shows the kernel's routing decisions. For more detailed analysis, especially in multi-interface or container setups, check which route a particular destination would take:
ip route get 1.1.1.1
If a host has multiple network interfaces or is part of a VPN or overlay, verify that the correct table is being used. Linux supports multiple routing tables, and policy routing determines which one applies. Check the rules:
ip rule show
Misconfigured rules can cause asymmetric routing, where packets leave through one interface but return on another. Firewalls often drop these replies because they appear invalid. One reliable fix is to assign separate routing tables for each interface and use ip rule add with from or fwmark selectors to control the path.
    For example, to route traffic from 192.168.10.0/24 through a specific gateway:
ip route add default via 192.168.10.1 dev eth1 table 10
ip rule add from 192.168.10.0/24 table 10
Always check for reverse path filtering:
sysctl net.ipv4.conf.all.rp_filter
Set it to 2 (loose mode) on multi-homed hosts, for example with sysctl -w net.ipv4.conf.all.rp_filter=2, to prevent dropped packets due to asymmetric routes.
    Routing issues rarely announce themselves clearly. The key is to map how packets should travel, then prove it with ip route get, traceroute, and tcpdump.
    DNS: The Eternal Suspect
    No other component gets blamed as frequently or incorrectly as DNS. Even the recent AWS outage that took down half of the internet was reportedly caused by DNS.
    When an application cannot reach its dependency, the first guess is always “maybe DNS is broken.” Sometimes it is, but often the problem is caching, misconfiguration, or unexpected resolution order.
    Start by checking the configured resolvers:
cat /etc/resolv.conf
Most distros these days use systemd-resolved, so the file may point to a stub resolver at 127.0.0.53. To see the active DNS servers:
resolvectl status
If resolution is inconsistent between services, the problem may be namespace isolation. Containers often have their own /etc/resolv.conf, copied at startup. If the host's DNS changes later, containers keep using outdated resolvers.
    Test resolution directly:
dig example.com
dig @8.8.8.8 example.com
Compare responses from the default resolver and a public one. If only the latter works, the issue lies in internal DNS or local caching.
    A subtle but common failure arises from nsswitch.conf. The order of resolution methods (files dns myhostname) determines whether /etc/hosts entries or mDNS override DNS queries. In container-heavy environments, this can lead to confusing name collisions.
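A quick way to check that order on a given host is shown below; the commented output is only a typical default, and yours may differ:
grep '^hosts:' /etc/nsswitch.conf
# e.g. hosts: files dns myhostname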
    💡DNS problems are not always network failures, but they produce identical symptoms. That is why verifying DNS resolution early saves hours of debugging.Even when DNS works, it can still mislead you. I remember spending an hour debugging a connection issue that turned out to be caused by an unexpected IPv6 AAAA record. The application preferred IPv6 but the route to that subnet was broken. The fix was as simple as setting precedence ::ffff:0:0/96 100 in /etc/gai.conf.
    MTU and Fragmentation Headaches
    The Maximum Transmission Unit or MTU defines how large a packet can be before it needs fragmentation. When this number mismatches between interfaces, tunnels, or virtual networks, packets vanish without trace. You get intermittent timeouts, partial uploads, and mysterious hangs in SSH sessions.
    To check the MTU on an interface:
ip link show eth0
To test path MTU discovery, use ping with increasing packet sizes:
ping -s 1472 8.8.8.8
Regular ICMP echoes may succeed even when TCP traffic fails. To detect MTU issues, you need to force the "do not fragment" flag:
ping -M do -s 1472 8.8.8.8
If it fails, lower the size until it succeeds. The MTU equals payload plus 28 bytes (ICMP and IP headers).
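A rough sketch for narrowing this down automatically; 8.8.8.8 is just an example target, and the payload sizes are arbitrary starting points:
for size in 1472 1452 1420 1400 1372; do
    if ping -M do -c 1 -s "$size" 8.8.8.8 > /dev/null 2>&1; then
        # this payload fits without fragmentation, so path MTU >= payload + 28
        echo "path MTU is at least $((size + 28))"
        break
    fi
done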
    In virtualized or overlay environments (VXLAN, WireGuard, GRE, eBPF), encapsulation overhead reduces the effective MTU. For example, VXLAN adds 50 bytes. Setting MTU to 1450 instead of 1500 avoids fragmentation.
    Adjust interface MTU safely:
ip link set dev eth0 mtu 1450
Applications sensitive to latency often experience erratic behavior because of hidden fragmentation. Once MTU mismatches are corrected, those mysterious slowdowns vanish.
    In container environments, MTU mismatches become especially painful. Overlay networks such as Flannel or Calico encapsulate packets inside UDP tunnels, reducing available space. If the MTU is not adjusted inside the container, performance plummets. A single missing ip link set dev eth0 mtu 1450 can make a cluster look broken.
    Overlay Networks and Ghost Packets
    Modern clusters rely on overlays to connect containers across hosts. VXLAN, WireGuard, and similar technologies encapsulate traffic into tunnels, creating virtual networks. They are convenient but introduce new failure modes that look invisible to traditional tools.
    A common symptom is “ghost packets” which is traffic that appears to leave one node but never arrives at another. The tunnel endpoint logs nothing, yet connectivity fails.
    The first step is to confirm that the tunnel interfaces exist and are up:
ip link show type vxlan
Check if the remote endpoint is reachable outside the tunnel:
ping <remote_host_ip>
If that fails, the problem is not the overlay but the underlay, the physical or cloud network below it.
    Next, verify that encapsulated traffic is not filtered. VXLAN uses UDP port 4789 by default, and WireGuard uses 51820. Ensure that firewalls on both ends allow those ports.
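As a sketch, assuming an nftables ruleset with an inet table named filter and an input chain (adjust the names to your own setup), opening both ports could look like this; the second line is the iptables equivalent for older hosts:
nft add rule inet filter input udp dport '{ 4789, 51820 }' accept
iptables -A INPUT -p udp -m multiport --dports 4789,51820 -j ACCEPT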
    To inspect whether encapsulation is functioning:
tcpdump -i eth0 udp port 4789
If packets appear here but not on the remote host, NAT or routing between the nodes is rewriting source addresses in a way that breaks return traffic.
    WireGuard adds its own layer of complexity. Its peers are identified by public keys, not IP addresses, so if the endpoint’s IP changes (for example, in cloud autoscaling), you must update its Endpoint in the configuration:
wg set wg0 peer <public-key> endpoint <new-ip>:51820
Overlay debugging requires seeing both worlds at once: the logical (tunnel) and physical (underlay) networks. Always verify that encapsulated packets can travel freely and that MTU accommodates the overhead. Most ghost packets die because of either firewall drops or fragmentation within the tunnel.
    When Firewalls and Conntrack Betray You
    Firewalls are supposed to protect systems, but when they fail silently, they create some of the hardest problems to diagnose. Linux’s connection tracking layer (conntrack) manages the state of every connection for NAT and stateful inspection. When its table fills or rules conflict, packets disappear with no visible error.
    Start by checking the current number of tracked connections:
cat /proc/sys/net/netfilter/nf_conntrack_count
cat /proc/sys/net/netfilter/nf_conntrack_max
I have debugged a number of microservice clusters where outbound connections failed intermittently, and the culprit was an overloaded conntrack table. Each NAT-ed connection consumes an entry, and the table silently drops new connections once full. The fix is simply increasing the limit:
sysctl -w net.netfilter.nf_conntrack_max=262144
For persistent tuning, add it to /etc/sysctl.conf.
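For instance, the persistent entry could be appended like this; the value is just the example from above, so size it to your own workload and memory:
echo 'net.netfilter.nf_conntrack_max = 262144' >> /etc/sysctl.conf
sysctl -p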
    State timeouts can also cause intermittent loss and long lived connections often expire in conntrack while still active on the application side. Adjust the TCP established timeout:
sysctl -w net.netfilter.nf_conntrack_tcp_timeout_established=3600
Firewalls configured with nftables or iptables can complicate debugging when NAT or DNAT rules are applied incorrectly. Always inspect the active NAT table:
nft list table nat
Make sure destination NAT and source NAT are paired correctly, because asymmetric NAT produces connection resets or silence.
    In high-throughput environments, offloading some rules to nftables sets with maps improves performance and reduces conntrack pressure. This is one of the areas where modern Linux firewalls significantly outperform legacy setups.
    Conntrack issues are often invisible until you look directly into its state tables. Once you learn to monitor them, many “random” connectivity problems turn out to be predictable and fixable.
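A minimal way to keep an eye on this is to watch usage against the limit; the conntrack CLI line assumes the conntrack-tools package is installed:
watch -n5 'grep -H . /proc/sys/net/netfilter/nf_conntrack_count /proc/sys/net/netfilter/nf_conntrack_max'
conntrack -S   # per-CPU counters; rising drop or insert_failed values point at a full or misbehaving table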
    Lessons I Wish I Learned Earlier
    Every engineer eventually learns that networking failures tend to follow recognizable patterns, and identifying those patterns early can save hours of unnecessary panic.
    1. Always check the local host first. Half of network incidents begin with something as simple as a down interface, a missing route, or an outdated /etc/resolv.conf.
    2. Validate one layer at a time. Use ping for reachability, dig for DNS, traceroute for routing, tcpdump for packet visibility, and nft list ruleset for firewalls and never skip steps.
    3. Document assumptions. When debugging, write down what you believe should happen before testing. Networking surprises often come from assumptions no one verified.
    4. Monitor the invisible. Connection tracking, queue lengths, and interface drops are invisible in standard metrics. Expose them to your monitoring system to catch silent failures early.
    5. Learn how Linux really routes. Most complex issues trace back to misunderstood routing tables, policy rules, or namespaces. Understanding these mechanisms transforms troubleshooting from guessing to knowing.
    Wrapping Up
    The more you troubleshoot Linux networking, the more you realize it is not about memorizing commands. It is about building mental models of how packets move, how policies influence paths, and how the kernel’s view of the network differs from yours.
For DevOps engineers managing modern infrastructure, from bare metal to Kubernetes, that understanding becomes essential. Once you have fixed enough DNS loops, routing asymmetries, and conntrack overflows, the next logical step is to study how Linux handles these problems at scale: multiple routing tables, virtual routing instances, nftables performance tuning, encrypted overlays, and traffic shaping.
    The Linux Networking at Scale course builds directly on these foundations. It goes deeper into policy routing, nftables, overlays, and QoS, the exact skills that turn network troubleshooting into design. I highly recommend checking it out.
    Linux Networking at ScaleMaster advanced networking on Linux — from policy routing to encrypted overlays.Linux HandbookUmair Khurshid
  11. by: Sourav Rudra
    Tue, 11 Nov 2025 14:22:16 GMT

    Having a reliable help desk solution is a must for any consumer-facing business in today's digital age. Whether you handle customer emails, support tickets, or live chat, a good help desk system keeps your communication organized and your customers happy.
    Sadly, many companies take advantage of this need. They push users into walled gardens where access to basic features can change on a whim and key tools get locked behind paywalls.
Help Scout's pricing as of November 11, 2025.One such case is Help Scout, which switched to a more expensive pricing plan. After customer backlash, the company reverted to a revised plan that was slightly cheaper than the one that sparked the outrage.
    But, what if I told you there was an alternative that does not make you anxious about sudden pricing changes? Something that lets you build your own setup, keep your data close, and pay only for what you actually need.
    FreeScout Doesn't Lock You In
    FreeScout is an open source help desk and shared mailbox built with PHP and Laravel. It is licensed under AGPL 3.0, which means the code is freely available, and you can self-host it on your own server without having to pay any user-based costs.
    You only pay for hosting and optional paid modules that expand functionality. Modules cover integrations, push notifications, and specialized features. Everything else, from ticket handling to automation, works out of the box once you install FreeScout.
    Other than the usual help desk features like shared inboxes, agent collision detection, canned responses, and user management, FreeScout offers flexibility that few platforms can match.
    FreeScout goes a step further with self-hosting, custom domains, API access, and full database control. You decide how your data is stored, backed up, and secured. For organizations that care about privacy and sovereignty, this makes a real difference.
    It also supports mobile apps for Android and iOS. Push notifications require a paid server-side module, but once configured, your team can manage tickets directly from their phones with no extra cloud dependencies.
    If you want integrations, FreeScout connects with Slack, Telegram, and other services. There are modules for CRM tools, customer portals, and even AI-assisted responses (via Community modules).
    Some Things to Keep in Mind
    Running FreeScout does need some technical setup. You will manage hosting, updates, and backups. Adding advanced features like AI-powered replies or analytics will take extra configuration and can add costs over time.
    Depending on your setup, you may still rely on FreeScout modules or community support. That means moving away later could take planning, though you always keep your data since it lives on your own server.
    In contrast, Help Scout and Zendesk provide everything under a single roof. They handle hosting, maintenance, and scaling for you but limit backend customization and control. You use what they provide within their rules.
Overall, what FreeScout offers beats any walled garden solution, especially for people running small businesses or larger teams that value data ownership and predictable costs over the convenience that comes with lock-in.
    Want to Deploy It?
You can try FreeScout in your browser using its live demo. If you would like to host it yourself, the official installation guide covers every step for various kinds of setups.
Plus, there are apps for both Android and iOS. However, for them to work with your FreeScout instance, you must do some additional configuration work.
    FreeScout🚀Run your own instance of FreeScout effortlessly in the cloud with PikaPods! Start free with $5 welcome credit 😎If you are considering a move from another help desk like Help Scout or Zendesk, you should check out the official migration guide, and if you are interested in the source code, then you can visit the project's GitHub repository.
    Suggested Read 📖
    5 Signs Your Proprietary Workflow Is Stifling Your Creativity (And What You Can Do About It)If these signs feel familiar, your creativity may be stifled by proprietary constraints.It's FOSSTheena Kumaragurunathan
  12. by: Abhishek Prakash
    Tue, 11 Nov 2025 11:07:18 GMT

    Imagine that one of the most prestigious open source software websites starts showing up in top results for "pornhub downloader".
    This is actually happening with Flathub, the official web-based app store for Flatpak packages.
Here's a demo I made at the risk of spoiling my relationship with Google:
And no, I was not particularly looking for a one-handy utility like this 👼
I was using Ahrefs, an SEO tool used for monitoring web rankings, among other things. This is when I noticed that Flathub was ranking for terms it should not have been.
    Flathub ranking for words it wouldn't wantNot just that, out of the top 10 ranked pages, at least 2 of them are NSFW tag pages.
    Top ranking pages are not something Flathub would be proud ofShady developer piggybacking on Flathub's reputation
This would have been an innovative, fun way to make more people use open source software, except that the applications using these tags are not open source software at all.
    There are actually three of these applications, all of which were created by the same developer, called Warlord Software. I am not going to link out to this website out of spite.
Similar kinds of apps, from the same developerIf you visit the Flathub page of these applications, nothing seems out of the ordinary; it looks like just a regular downloader app for Linux.
    Seems like a regular downloader app until it is notBut when you scroll down to the tag section, this is where you see the root cause of the problem. All three apps are using those NSFW tags.
This is a deliberate act of exploiting Flathub's good reputation to get more people to download these applications before upselling them to the paid version. Yes, all three of these apps have premium licenses as well.
    Before you say that this is all no-issue and there is nothing wrong with offering an app for downloading videos from adult websites, let me tell you that you will find no such tags or words mentioned anywhere on the developer's website:
    No NSFW words on developer's own website where these apps are offeredHere's what's going on...
See, it is nearly impossible for a new website or application to rank for popular but highly competitive keywords like 'xyz downloader'. There are numerous websites and tools that let you download online videos from X number (or XXX number) of websites.
So this developer created a few downloader apps that have no special features, offered Flatpak versions for Linux users, published them on Flathub, and tagged them with the NSFW keywords. With the verified tag, the apps look more legit and tempting to download.
    It is easy for a highly reputed website like Flathub to rank highly for those terms.
    This way, a shrewd developer who would have never been able to get even 100 downloads on his own got more than 250,000 downloads.
    There are tons of good downloader applications for Linux. They can also use these keywords, but we only have apps made by a certain developer doing this. This is pure exploitation of the Flathub ecosystem.
    Flathub is not to be blamed here
It's not entirely their fault that someone added NSFW words and used them to sell shady proprietary apps. Although they should be more careful about such clear exploitation of their web reputation.
Now, it may seem like I am making an issue out of nothing. Perhaps. I actually noticed this a few months ago. I wanted to write about it but then decided to ignore a 'non-issue'. A few months later, Flathub was still ranking for all kinds of this-hub, that-tube, xyz-hamster downloaders, and I could not tolerate it anymore.
    Lovely folks at Flatpak/Flathub/Fedora, please take note. My rant ends here.
  13. by: Chris Coyier
    Mon, 10 Nov 2025 18:00:39 +0000

It's interesting to me to think about how, during a lot of the web's evolution, there were many different browser engines (more than there are now) and they mostly just agreed, on paper, to do the same stuff. We focus on how different things could be cross-browser back then, which is true, but mostly it all worked pretty well. A miracle, really, considering how unbelievably complicated browsers are.
    Then we got standards and specifications and that was basically the greatest thing that could have happened to the web. So we put on our blue beanies and celebrate that, which also serves as a reminder to protect these standards. Don’t let browsers go rogue, people!
    Then, still later, we actually got tests.
    In retrospect, yes, obviously, we need tests. These are now web-platform-tests (WPT), and they help all the browser engines make sure they are all doing the right thing. Amazing.
    (Side note: isn’t it obnoxious how many billions of dollars goes into newfangled browsers without any of them contributing or funding actual browser engine work?)
I only recently just saw browserscore.dev by Lea Verou as well. Yet another tool to keep browsers honest. Frankly I'm surprised how low all browsers score on those tests. I read in one of Lea's commit messages "We're not WPT, we're going for breadth not depth." which I found interesting. The Browser Score tests run in the browser, and they run pretty damn fast. I haven't run them myself, but I have a feeling WPT tests take… a while.
    How can we improve on all this? Well a gosh-darn excellent way to do it is what the companies that make browsers have already been doing for a number of years: Interop. Interop is a handshake deal from these companies that they are going to get together and pick some great things that need better testing and fixed up implementations and then actually do that work. Interop 2025 looks like it went great again.
    It’s that time again now, and these browser companies are asking for ideas for Interop 2026. If you have something that bugs you how it works cross-browser, now is a great time to say so. Richard has some great ideas that seem like perfect fits for the task.
Godspeed, y'all. We can't all be like Keith and just do it ourselves.
  14. by: Neeraj Mishra
    Mon, 10 Nov 2025 16:40:16 +0000

    Creating and updating geo targeted APIs may seem easy, but there are countless challenges involved. Every country, every city, and every mobile network can respond differently and will require distinct adjustments. When pricing endpoints contain location-based compliance features and payment options, testing them will require more than one physical location. Proxies are a crucial part of the developer’s toolkit–they enable you to virtually “stand” in another country to observe what the users see.
    Developers encounter many problems when it comes to testing geo targeted APIs and it is the use of proxies that addresses this concern. In this article, we will outline the proxy use case and its benefits, the different proxy types, and potential challenges. We will maintain a practical approach so that you can pass it to a QA engineer or a backend developer and they will be able to use it directly.
    What Are Geo Targeted APIs and Why Do They Matter?
    A geo targeted API is an API that customizes its response according to clients’ geographical location. Such locations are primarily determined by an IP address, sometimes by headers, and in specific situations by account data. Streaming services provide different content to different countries, hotel booking systems adjust prices based on geographical location, ride-hailing apps change currency according to local clientele, and fintech apps restrict viewable payment services based on geographical payment regulations.
Why are developers so focused on this? Such APIs need to be consistent, compliant, and predictable, and for good reason. When users in Poland see prices in USD instead of the local PLN, or people in the UK see services that are not legally available to them, the likely results are customer dissatisfaction, transaction failures, or, in the worst case, regulatory issues. Ensuring that geo logic is accurately tested is not optional; for anything that concerns money, content, or the law, it is essential built-in QA.
If a team is based in a single location, all the requests they make come from that location. Mocking the API is an option, but it will not tell you what the real upstream service will return, and that's critical information. You need a way to make requests appear to come from a different geographical location; that is the function of a proxy in this situation.
    Why Proxies Are the Easiest Way to Test Location-Based Responses?
A proxy server acts as an intermediary that conveys your request to the target API and returns the response. The important detail is that the API only sees the proxy's IP address, not yours. If the proxy is in Germany, the API will think the request is coming from Germany; if it is in Brazil, the API will see Brazil. A developer can use a good proxy pool to send an API request from 10 different countries and check if the API is working correctly.
    You also don’t have to set up test infrastructure in different regions. No cloud instances have to be set up in various geographies every time you want to test. You don’t have to rely on colleagues from different countries to participate in “just a check” test. Simply route the request through a different IP address and analyze the results.
    Another reason for the popularity of proxies in this task is that they work on the network level. There is no need to alter the API code itself, only the API caller needs to be changed. This enables QA engineers and backend developers to test production-like behavior without changing the production logic.
    Typical Workflow: How Developers Actually Use Proxies in Testing
    Let’s break down a realistic workflow you’d see in a team that regularly tests geo targeted APIs.
    Define the geo scenarios
    First, the team decides which locations they need to test: EU vs US, specific countries like UK, Canada, Germany, UAE, or mobile-only markets. This list often mirrors business logic in the API.
    Choose or rotate proxies for those locations
    The tester/developer picks proxy endpoints that match those locations. A good provider will offer a large choice of countries so you don’t have gaps in testing.
    Send the same API request through different proxies
    The team sends the same endpoint call – say, /v1/pricing?product=123 – but with the client configured to use different proxy IPs. The API should return different currencies, prices, availability, language, or content depending on the location.
    Capture and compare responses
    Responses are saved and compared either manually or with automated tests. If Germany and France receive the same content but they were supposed to be different, that’s a bug.
    Automate for regression
    Once the pattern is confirmed, the team bakes it into CI/CD or scheduled tests. Every time the API is deployed, the test suite calls it from multiple countries via proxies to ensure nothing broke.
    That’s the core idea: same request, different exit IP, compare output.
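As a minimal sketch of that idea, only the --proxy value changes between calls; the proxy hostnames and the API URL here are hypothetical placeholders:
curl -s --proxy http://de.proxy.example:8080 "https://api.example.com/v1/pricing?product=123"
curl -s --proxy http://us.proxy.example:8080 "https://api.example.com/v1/pricing?product=123"
Piping both responses through jq or diff makes the per-country differences obvious.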
    Which Types of Proxies Are Best for Geo API Testing?
    Not all proxies are equal, and developers learn this quickly once they start hitting real services. Some APIs are strict, some are lenient, and some are downright suspicious of automated traffic. So choosing the right proxy type matters.
    Here is a simple comparison to help decide:
Proxy Type | Best Use Case | Pros | Cons
Datacenter proxies | Fast functional testing across many countries | High speed, good for automation, cheaper | Some services detect them as non-residential
Residential proxies | Testing real-user conditions and stricter APIs | High trust, looks like normal user traffic | Slower, often more expensive
Mobile proxies | Testing mobile-only features and app endpoints | Seen as mobile users, great for app testing | Most expensive, limited availability
Rotating proxies | Large-scale multi-geo automated testing | IP freshness, less blocking over many calls | Harder to debug single fixed IP behaviour
For most backend teams, datacenter proxies are enough to verify logic: does the API return EUR to a German IP and GBP to a UK IP? For QA teams testing production-like flows, residential or mobile proxies are better, because many modern APIs personalise content or apply security rules based on the perceived "realness" of the IP.
    If you need a flexible source of geo IPs for dev and QA, using a provider like proxys.io is convenient because you can pick locations on demand and plug them into your scripts without overcomplicated setup.
    Key Things Developers Test with Proxies
    Developers don’t use proxies for fun; they use them to answer very specific questions about how a geo targeted API behaves. Here are the most common areas they validate:
    Currency and localisation (USD vs EUR vs GBP, date formats, language headers)
    Regional availability (is this product/service actually shown in this market?)
    Compliance-based hiding (is restricted content hidden in specific countries?)
    Pricing tiers (do high-income regions get different price ladders?)
    Payment gateways (is a certain payment method visible in that country?)
    Feature flags tied to geography (e.g. features rolled out in 3 markets only)
    By running the exact same call through 5–10 different country proxies, the developer immediately sees if business rules are correctly encoded in the API.
    One Practical List: Best Practices for Using Proxies in API Testing
    Use HTTPS for all proxy traffic to avoid tampering and to mirror real-world usage.
    Keep a mapping of “country → proxy endpoint” in your test repo so tests are reproducible.
    Log the IP and country used for each test run – it makes debugging much easier.
    Don’t rely on just one IP per country; some APIs will cache responses per IP.
    Add assertions per country in automated tests (“if country=DE, expect currency=EUR”).
    Rotate or refresh proxies periodically to avoid stale or blocked IPs.
    Document test coverage so product owners know which countries are actually being tested.
    This is the kind of hygiene that turns proxies from an ad-hoc trick into a stable part of your QA pipeline.
    How to Integrate Geo Proxy Testing into Automated Pipelines
    A lot of teams start by testing manually with a proxy in Postman, Insomnia, or curl. That’s fine for discovery, but not enough for long-term reliability. The real win is when you add multi-geo tests into CI/CD so every deployment checks location-based behaviour automatically.
The pattern is straightforward (a Bash sketch follows the list):
    Your test suite has a list of target countries.
    For each country, the test runner sets the proxy configuration.
    The runner calls the API and captures the response.
    The test compares the response to the expected shape/content for that country.
    If even one country fails (for example, Canada doesn’t get CAD), the pipeline fails.
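Here is a minimal Bash sketch of that loop; the proxy endpoints, API URL, and expected currencies are placeholders, and it assumes Bash 4+ plus curl and jq in the CI image:
#!/usr/bin/env bash
# Fail the pipeline if any country gets the wrong currency (placeholder endpoints and URL).
set -euo pipefail
declare -A PROXY=( [DE]="http://de.proxy.example:8080" [GB]="http://uk.proxy.example:8080" [CA]="http://ca.proxy.example:8080" )
declare -A WANT=( [DE]="EUR" [GB]="GBP" [CA]="CAD" )
for country in "${!PROXY[@]}"; do
    got=$(curl -s --proxy "${PROXY[$country]}" "https://api.example.com/v1/pricing?product=123" | jq -r '.currency')
    if [[ "$got" != "${WANT[$country]}" ]]; then
        echo "FAIL: $country returned $got, expected ${WANT[$country]}"
        exit 1
    fi
    echo "OK: $country -> $got"
done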
Because proxies work at the network level, they are compatible with virtually any language or testing framework, be it JavaScript (Axios, node-fetch), Python (requests), Java (HttpClient), Go (http.Client with a custom transport), or even a cURL-based Bash script. It is simply a matter of setting the proxy for each request.
    This is extremely useful for teams implementing progressive geo-release features. Suppose the marketing team wants to release a feature in the UK and Germany, but not in the US. Your continuous integration system could enforce this rule. If the US suddenly gets the feature, the build fails. That is control.
    Common Pitfalls and How to Avoid Them
    While proxy-based testing is simple in principle, developers do hit some recurring issues:
    1. API uses more than IP to detect location
Some APIs also look at Accept-Language, SIM/Carrier data (for mobile), or account settings. If you only change the IP, you might not trigger all geographic branches. Solution: mirror headers and user profile conditions where possible (see the sketch after this list).
    2. Caching hides differences
    If the upstream service caches by URL only (not by IP), you might get the same response even when changing country. Solution: add cache-busting query params or ensure the API is configured to vary by IP.
    3. Using free or low-quality proxies
    Unreliable proxies cause false negatives – timeouts, blocked IPs, or wrong countries. For testing business logic, stable and correctly geo-located IPs matter more than saving a dollar.
    4. Forgetting about time zones
    Some services couple geo logic with local time. If you test only the IP but not the time window, you might think the feature is missing. Document time-based rules separately.
    5. Not logging proxy usage
    When someone reports “Germany didn’t get the right prices”, you need to know which IP you used. Always log the proxy endpoint and country for traceability.
    Avoiding these mistakes makes geo testing with proxies extremely reliable.
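For the first pitfall above, mirroring location hints beyond the IP can be as simple as sending a matching Accept-Language header along with the proxied request; the proxy endpoint and URL are again placeholders:
curl -s --proxy http://de.proxy.example:8080 -H "Accept-Language: de-DE" "https://api.example.com/v1/pricing?product=123"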
    Why Proxies Beat Manual Remote Testing
    You could ask a colleague in Spain to click your link. You could set up cloud instances in 12 regions. You could even travel. But those options are slow, expensive, and not repeatable. Proxies, on the other hand:
    Work instantly from your current location
    Scale to as many countries as your provider supports
    Can be run in CI/CD, not just manually
    Are independent from your personal device or IP
    Are easy to rotate if one IP is blocked
    From an engineering point of view, they’re simply the most automatable way to emulate different user geographies.
    Conclusion: Proxies Turn Geo Testing into a Repeatable Process
    There are geo-targeted APIs everywhere – commerce, content, fintech, mobility, gaming, SaaS. Any product you operate in multiple countries will eventually have to solve the question, “What does this look like for users in X?” Proxies give the cleanest way for developers to programmatically answer this question. 
    Developers can check whether prices, currencies, languages, availability, and compliance rules behave as expected by changing the same API call to use different country IPs. With a good proxy provider, you can turn this from a one-off debugging technique into a standard check in your testing process. 
    The conclusion is straightforward: If the API logic is based on the user’s location, so must the testing be. Proxies are the way to achieve this from your desk.
    The post How Developers Use Proxies to Test Geo Targeted APIs? appeared first on The Crazy Programmer.
  15. by: Sourav Rudra
    Mon, 10 Nov 2025 14:59:22 GMT

    Humble Bundle has a Linux collection (partner link) running right now that's kind of hard to ignore. Twenty-two books covering everything from "how do I even install this" to Kubernetes orchestration and ARM64 reverse engineering. All from Apress and Springer; this means proper technical publishers, not some random self-published stuff.
    Humble Tech Book Bundle: Linux for Professionals by Apress/SpringerUnlock essential resources for Linux—get a professional edge on the competition with a little help from the experts at Apress & Springer!Humble BundleIf you decide to go ahead with this bundle, your money will go to support Room to Read, a non-profit that focuses on girls' literacy and education in low-income communities.
    ⏲️ The last date for the deal is November 24, 2025.
    📋This article contains affiliate links. Please read our affiliate policy for more information.So, What's in The Bundle?
    First off, the "Zero to SysAdmin" trilogy. Using and Administering Linux: Volume 1 covers installation and basic command line usage. Volume 2 goes into file systems, scripting, and system management. Volume 3 focuses on network services like DNS, DHCP, and email servers.
    The Kubernetes coverage includes three books. Deploy Container Applications Using Kubernetes covers microk8s and AWS EKS implementations. Ansible for Kubernetes by Example shows cluster automation. Kubernetes Recipes provides solutions for common deployment scenarios. Plus Certified Kubernetes Administrator Study Companion if you're prepping for the CKA exam.
    systemd for Linux SysAdmins explains the init system and service manager used in modern distributions. It covers unit files, service management, and systemd components.
    For low-level work, there's Assembly Language Reimagined for Intel x64 programming on Linux. Foundations of Linux Debugging, Disassembling, and Reversing covers x64 architecture analysis. Foundations of ARM64 Linux Debugging, Disassembling, and Reversing does the same for ARM64.
    Linux Containers and Virtualization covers container implementation using Rust. Oracle on Docker explains running Oracle databases in containers. Supercomputers for Linux SysAdmins covers HPC cluster management and hardware.
    Yocto Project Customization for Linux is for building custom embedded Linux distributions. Pro Bash is a shell scripting reference. Introduction to Ansible Network Automation covers network device automation.
    The Enterprise Linux Administrator and Linux System Administration for the 2020s both cover current sysadmin practices. Practical Linux DevOps focuses on building development labs. CompTIA Linux+ Certification Companion is exam preparation material. Linux for Small Business Owners covers deploying Linux in small business environments.
    What Do You Get for Your Money?
    All 22 books are available as eBooks in PDF and ePub formats. They should work on most modern devices, ranging from computers and smartphones to tablets and e-readers.
    Here's the complete collection. 👇
CompTIA Linux+ Certification Companion
Introduction to Ansible Network Automation
Certified Kubernetes Administrator Study Companion
Pro Bash
Yocto Project Customization for Linux
Linux Containers and Virtualization
Using and Administering Linux: Volume 1
Foundations of ARM64 Linux Debugging, Disassembling, and Reversing
Using and Administering Linux: Volume 2
Foundations of Linux Debugging, Disassembling, and Reversing
Using and Administering Linux: Volume 3
Deploy Container Applications Using Kubernetes
systemd for Linux SysAdmins
Ansible for Kubernetes by Example
Assembly Language Reimagined
Linux for Small Business Owners
Kubernetes Recipes
Linux System Administration for the 2020s
Oracle on Docker
Practical Linux DevOps
Supercomputers for Linux SysAdmins
The Enterprise Linux Administrator
There are three pricing tiers here:
    $1 tier: Two books: Linux System Administration for the 2020s and Practical Linux DevOps. Both focus on current practices. Not bad for a dollar.
    $18 tier: Adds three more books covering Kubernetes, Ansible automation, and DevOps stuff. Five books total.
    $25 tier: All 22 books. This is where you get the whole bundle.
    These books are yours to keep with no DRM restrictions. Head over to Humble Bundle (partner link) to grab the collection before the deal expires.
    Get The Deal (partner link)
  16. by: Geoff Graham
    Mon, 10 Nov 2025 14:44:13 +0000

    A few links about headings that I’ve had stored under my top hat.
    “Page headings don’t belong in the header”
    Martin Underhill:
    A classic conundrum! I’ve seen the main page heading (<h1>) placed in all kinds of places, such as:
The site <header> (wrapping the site title)
A <header> nested in the <main> content
A dedicated <header> outside the <main> content
Aside from that first one — the site title serves a different purpose than the page title — Martin pokes at the other two structures, describing how the implicit semantics impact the usability of assistive tech, like screen readers. A <header> is a wrapper for introductory content that may contain a heading element (in addition to other types of elements). Similarly, a heading might be considered part of the <main> content rather than its own entity.
    So:
<!-- 1️⃣ -->
<header>
  <!-- Header stuff -->
  <h1>Page heading</h1>
</header>
<main>
  <!-- Main page content -->
</main>

<!-- 2️⃣ -->
<main>
  <header>
    <!-- Header stuff -->
    <h1>Page heading</h1>
  </header>
  <!-- Main page content -->
</main>
Like many of the decisions we make in our work, there are implications:
If the heading is in a <header> that is outside of the <main> element, it's possible that a user will completely miss the heading if they jump to the main content using a skip link. Or, a screenreader user might miss it when navigating by landmark. Of course, it's possible that there's no harm done if the first user sees the heading prior to skipping, or if the screenreader user is given the page <title> prior to jumping landmarks. But, at worst, the screenreader will announce additional information about reaching the end of the banner (<header> maps to role="banner") before getting to the main content.
If the heading is in a <header> that is nested inside the <main> element, the <header> loses its semantics, effectively becoming a generic <div> or <section>, thus introducing confusion as far as where the main page header landmark is when using a screenreader.
All of which leads Martin to a third approach, where the heading should be directly in the <main> content, outside of the <header>:
<!-- 3️⃣ -->
<header>
  <!-- Header stuff -->
</header>
<main>
  <h1>Page heading</h1>
  <!-- Main page content -->
</main>
This way:
    The <header> landmark is preserved (as well as its role). The <h1> is connected to the <main> content. Navigating between the <header> and <main> is predictable and consistent. As Martin notes: “I’m really nit-picking here, but it’s important to think about things beyond the visually obvious.”
    Read article “Fluid Headings”
    Donnie D’Amato:
To recap, we're talking about text that scales with the viewport size. That's usually done with the clamp() function, which sets an "ideal" font size that's locked between a minimum value and a maximum value it can't exceed.
    .article-heading { font-size: clamp(<min>, <ideal>, <max>); } As Donnie explains, it’s common to base the minimum and maximum values on actual font sizing:
    .article-heading { font-size: clamp(18px, <ideal>, 36px); } …and the middle “ideal” value in viewport units for fluidity between the min and max values:
    .article-heading { font-size: clamp(18px, 4vw, 36px); } But the issue here, as explained by Maxwell Barvian on Smashing Magazine, is that this muffs up accessibility if the user applies zooming on the page. Maxwell’s idea is to use a non-viewport unit for the middle “ideal” value so that the font size scales to the user’s settings.
Donnie's idea is to calculate the middle value as the difference between the min and max values and make it relative to the difference between the maximum number of characters per line (something between 40-80 characters) and the smallest viewport size you want to support (likely 320px, which is what we traditionally associate with smaller mobile devices), converted to rem units.
.article-heading {
  --heading-smallest: 2.5rem;
  --heading-largest: 5rem;
  --m: calc(
    (var(--heading-largest) - var(--heading-smallest)) / (30 - 20) /* 30rem - 20rem */
  );
  font-size: clamp(
    var(--heading-smallest),
    var(--m) * 100vw,
    var(--heading-largest)
  );
}
I couldn't get this working. It did work when swapping in the unit-less values with rem. But Chrome and Safari only. Firefox must not like dividing units by other units… which makes sense because that matches what's in the spec.
    Anyway, here’s how that looks when it works, at least in Chrome and Safari.
    CodePen Embed Fallback Read article Style :headings
    Speaking of Firefox, here’s something that recently landed in Nightly, but nowhere else just yet.
    Alvaro Montoro:
    :heading: Selects all <h*> elements. :heading(): Same deal, but can select certain headings instead of all. I scratched my head wondering why we’d need either of these. Alvaro says right in the intro they select headings in a cleaner, more flexible way. So, sure, this:
    :heading { } …is much cleaner than this:
    h1, h2, h3, h4, h5, h6 { } Just as:
    :heading(2, 3) {} …is a little cleaner (but no shorter) than this:
    h2, h3 { } But Alvaro clarifies further, noting that both of these are scoped tightly to heading elements, ignoring any other element that might be heading-like using HTML attributes and ARIA. Very good context that’s worth reading in full.
    Read article Headings: Semantics, Fluidity, and Styling — Oh My! originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  17. by: Sourav Rudra
    Mon, 10 Nov 2025 12:35:00 GMT

Microsoft's proprietary formats like .doc and .docx dominate the office productivity landscape. Most people and organizations rely on these formats for daily document work. This creates a predatory situation where vendor lock-in is the norm and compatibility issues are taken as an omen that moving away from Microsoft Office is a bad idea.
    OpenDocument Format (ODF) offers an open alternative. It is an ISO-standard XML-based format for text documents, spreadsheets, presentations, and graphics. ODF works across multiple office suites, including LibreOffice, Collabora Online, and Microsoft Office itself.
    The format operates under the OASIS Open umbrella, a nonprofit consortium that develops open standards and open source projects. It brings together individuals, organizations, and governments to solve technical challenges through collaboration.
    Coming after four years of development work, OASIS Open has introduced ODF 1.4, marking a major milestone during ODF's 20th anniversary as an OASIS Standard.
    ODF 1.4 Packs in Many Upgrades
    The development involved contributions from multiple organizations. Engineers from Collabora, The Document Foundation, IBM, Nokia, Microsoft, and KDE participated. Community members from the LibreOffice project also made significant contributions.
    As for the major improvements of this release, tables can now be placed inside shapes, breaking free from the textbox-only limitation. This bridges a compatibility gap with Microsoft's OOXML and other file formats, making cross-format workflows smoother.
    Accessibility gets meaningful upgrades through decorative object marking. Images and shapes can be flagged as decorative, instructing screen readers to skip them. This eliminates clutter for assistive technology users navigating complex documents.
    A new overlap prevention property helps manage document layout. Anchored objects can now specify whether they need to avoid sitting on top of other elements. This gives users finer control over how images and shapes interact on a page.
    Text direction support improves with 90-degree counter-clockwise rotation. Content can now flow left to right, then top to bottom, in this rotated orientation. The addition complements the existing clockwise direction commonly used for Japanese text layouts.
    Michael Stahl, Senior Software Engineer at Collabora Productivity, explained the development approach:
    For a Closer Look
    The complete ODF 1.4 specification is available on the OASIS Open documentation website. The specification consists of four numbered documents covering different aspects of the standard.
    Part 1 provides the introduction and master table of contents. Part 2 defines the package format language. Part 3 contains the XML schema definitions. Part 4 specifies the formula language for spreadsheet calculations.
    ODF 1.4Suggested Read 📖
    Ownership of Digital Content Is an Illusion—Unless You Self‑HostPrices are rising across Netflix, Spotify, and their peers, and more people are quietly returning to the oldest playbook of the internet: piracy. Is the golden age of streaming over?It's FOSSTheena Kumaragurunathan
  18. by: Roland Taylor
    Mon, 10 Nov 2025 05:30:39 GMT

    If you love working in the terminal or just want something fast and lightweight for calendar management, Calcurse gives you a full organiser you can use right in your shell. As its name suggests, Calcurse uses ncurses to deliver a complex command-line interface that rivals some GUI apps in features and efficiency.
If you don't need automated reminders and/or the overhead of a database, it's great for keeping track of your appointments and to-do lists. Being lightweight, it works well in server environments over SSH, and is a great candidate for those using low-powered devices.
    Understanding Calcurse at a glance
    The standard Calcurse interface in actionCalcurse is written in C, and boasts robust support for scripting and helper tools. It supports many of the features you'd expect in a GUI calendar app, including iCalendar (.ics) import/export, as well as some you may never have thought of. It should bring back some nostalgia if you were around during the early days of computing (DOS, early Unix, etc), where text-based user interface (TUI) apps were predominant, and complex, keyboard-driven interfaces were actually the norm.
    📋I can't cover everything about Calcurse here, since it's got way too many features for a single article. If you're interested in trying it out, check out the documentation.Calcurse operates in three main forms:
An interactive ncurses interface: the standard Calcurse interface that you get by running the calcurse command with no arguments or flags.
A non-interactive mode: prints output according to the given parameters and exits. Called by passing flags like --status.
A background daemon: must first be enabled from the ncurses interface or run with --daemon; it can be stopped by starting the interactive interface or by using pkill calcurse.
Most actions are a single keystroke away, with on-screen prompts and a simple help/config menu when you need it. Once the shortcuts click, navigation is quick and predictable.
    Where most calendar apps store your data in a database, Calcurse uses plain text files on the backend. This choice keeps it snappy, easy to back up, and instantly responsive to your changes. At this time, Calcurse can only show one calendar per instance, so if you'd like to have multiple calendars, you'll need to run different instances, each connected to a different calendar file (with -c) and data directory (with -D).
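For example, a second calendar can live in its own files; the paths below are just placeholders:
# run an instance against a separate data directory
calcurse -D ~/.local/share/calcurse-work
# or point an instance at a specific calendar file
calcurse -c ~/calendars/personal-apts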
Notifications and sync? Check!
    Calcurse supports notifications within its ncurses UI or by running a custom command (such as your system's mailer or your desktop environment's own notification system). By default, Calcurse does not run as a daemon (background process), so as long as you're not actively running it, it uses no additional system resources.
    However, being as versatile as it is, you can enable daemon mode so Calcurse can deliver notifications even after you quit the UI. Launching the UI typically stops the daemon to avoid conflicting instances, unless using the --status flag. To avoid this, you can run Calcurse as a separate instance or query it using the appropriate flags without bringing up the UI. If you'd prefer a more hands-on approach, you can set up cron jobs and scripting to interact with the non-interactive mode for the same purposes.
    iCalendar import/export is built into the native app itself and can be invoked with "i" (for import) or "x" (for export). CalDAV sync is also supported, but requires a third-party helper (calcurse-caldav). It's still considered alpha-quality software, and does require its own database, so syncing between Calcurse instances may be a little trickier here.
    Going deeper on syncing
Perhaps one of the coolest parts of using a tool like Calcurse is that since everything is kept in plain text, you can use version control for just about everything: from configurations to schedules. If you have a certain schedule you'd like to sync between your devices, you'd just need to store your ~/.config/calcurse and ~/.local/share/calcurse folders in a Git repo or your personal Nextcloud server, for instance.
    You could have the actual folder stored in your sync location and have Calcurse read from a symlink. This way, you could manually edit your configuration from anywhere, and have it automatically sync to every device where you use Calcurse. Pretty handy for power users with many devices to manage.
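A minimal sketch of that symlink setup, assuming a synced folder at ~/Nextcloud (the paths are illustrative):
# Move the data directory into the synced location once
mv ~/.local/share/calcurse ~/Nextcloud/calcurse
# Let Calcurse read it through a symlink on every device
ln -s ~/Nextcloud/calcurse ~/.local/share/calcurse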
    Customisation and quality-of-life
Customizing the colour theme in Calcurse is easy
With how many advanced features Calcurse offers, you may not be too surprised to learn that it supports a degree of customisation (in interactive mode), accessible through the config menu. You can change the colours and layout, or choose the first day of the week. You can also enable various quality-of-life features, like autosave and confirmations.
If you don't like the standard key bindings, you can set your own, which is quite handy if you have particular preferences. For example, you can bind a custom key for jumping between views. This is especially useful if you're running Calcurse in a terminal emulator under Wayland: you won't need to worry about conflicts with your desktop environment's own hotkeys.
    Changing views
Calcurse with the calendar in week view
If you'd like to change how the calendar is displayed, you can switch the appearance.calendarview option in the config between monthly and weekly. In weekly view, the week number is shown in the top-right corner of the calendar. There's no way to enable this in the monthly view; it shows the day of the year there instead.
Creating an appointment with the calendar in month view
If you'd like to show notifications in Calcurse itself, you can toggle the notification bar with the appearance.notifybar option. I didn't test notifications in this way, as I'd prefer to set up system integration.
    Where Calcurse might not be for you
    Of course, as powerful as it is, Calcurse does have some quirks and shortcomings that may be an issue for some users. For instance, it does not support any fancy views or month-grid editing like many GUI calendar tools. To be fair, the default interface is simple enough to be comfortable to use once you get used to it, but if you need these additional features, you're out of luck.
One other quirk is that the 12-hour time format is not supported throughout the whole app. The interactive list uses the hh:mm format, whereas the notification bar and CLI output can be switched to the 12-hour format. The rest of the app displays its time in the 24-hour format. Changing the format where it is allowed isn't trivial, so be prepared to consult the documentation for this one.
The format quirks also show up in how you specify the display format for dates. Unless you're well-versed in these, you might find yourself consulting the documentation often. This could be off-putting for some users, even terminal lovers who prefer the TUI over everything else. The config is also inconsistent here, since format.inputdate uses simple numbers, whereas format.dayheading uses the less familiar "%-letter" format.
Overall, even if you like working on the command line, the learning curve for Calcurse can be a little steep. That said, once you get acclimated, the key-driven TUI is actually comfortable to work with, and the wide range of features makes it a great tool for those who like to build custom solutions on top of headless apps.
    Getting Calcurse on your distro
    Calcurse is packaged for many distros, including Debian/Ubuntu, Arch, Fedora, and others, as well as their derivatives, of course. You can search for calcurse in your software manager (if it supports native packages) or use your standard installation commands to install it:
    Debian/Ubuntu/Mint:
sudo apt install calcurse
Fedora:
sudo dnf install calcurse
Arch:
sudo pacman -S calcurse
However, if you're looking to build from source, you can grab up-to-date source releases from the Calcurse downloads page, or pull the latest code from the project's GitHub page.
📋 Calcurse does not track releases on its GitHub page. If you pull from Git, you're essentially pulling the development branch.
Conclusion
    Calcurse is a rare gem: a powerful, straightforward TUI calendar management app with support for iCal import/export, CalDAV sync, and scriptable reports. If you live in the shell, manage servers over SSH, or want plain-text data you can version, it's a reasonable solution. Sure, there are real trade-offs: no month-grid, a slight learning curve, and 12-hour time relegated to the notification bar and output. For terminal-first users, it is an easy recommendation.
  19. by: Theena Kumaragurunathan
    Sun, 09 Nov 2025 03:44:40 GMT

    Privacy is a practice. I treat it like tidying my room. A little attention every weekend keeps the mess from becoming a monster. Here are seven wins you can stack in a day or two, all with free and open source tools.
    1. Harden your browser
    Firefox is still the easiest place to start. Install uBlock Origin, turn on strict tracking protection, and only whitelist what you truly need. Add NoScript if you want to control which sites can run scripts.
Why it matters: Most tracking starts in the browser. Blocking it reduces profiling and drive-by nasties.
How to do it: In Firefox settings, set Enhanced Tracking Protection to Strict. Install uBlock Origin. If you're comfortable, install NoScript and allow scripts only on trusted sites.
Trade-off: Some pages break until you tweak permissions. You'll learn quickly which sites respect you.
2. Search without surveillance
Shift your default search to privacy-respecting frontends and engines. SearXNG is a self-hostable metasearch engine. Startpage is an option if you want something similar to Google, although the excessive ads on its search page are a turn-off.
Why it matters: Your searches reveal intent and identity. Reducing data capture lowers your footprint.
How to do it: Set your browser's default search to DuckDuckGo, Startpage, or a trusted SearXNG instance. Consider hosting SearXNG later if you enjoy tinkering.
Trade-off: Results can feel slightly different from Google. For most queries, they're more than enough.
📋 The article contains some partnered affiliate links. Please read our affiliate policy.
3. Block ads and trackers on your network
    A Pi‑hole or AdGuard Home (partner link) box filters ads for every device behind your router. It’s set‑and‑forget once configured. AdGuard is not open source but a trusted mainstream service.
Why it matters: Network-level filtering catches junk your browser misses and protects smart TVs and phones.
How to do it: Install Pi-hole or AdGuard Home on a Raspberry Pi or a spare machine. Point your router's DNS to the box.
Trade-off: Some services rely on ad domains and may break. You can whitelist specific domains when needed.
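If you go the Pi-hole route, the project's documented one-step installer plus a quick DNS check is roughly all it takes; review the script before piping it to a shell, and the test domain and IP address below are placeholders:
# Pi-hole's official one-step installer
curl -sSL https://install.pi-hole.net | bash
# Point a client at the Pi-hole box and confirm queries go through it
dig example.com @192.168.1.2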
4. Private DNS and a lightweight VPN
Encrypt DNS with DNS-over-HTTPS and use WireGuard for a fast, modern VPN. Even if you only use it on public Wi-Fi, it's worth it.
Why it matters: DNS queries can expose your browsing. A VPN adds another layer of transport privacy.
How to do it: In Firefox, turn on DNS-over-HTTPS. Set up WireGuard with a reputable provider or self-host if you have a server.
Trade-off: A tiny speed hit. Misconfiguration can block certain services. Keep a fallback profile handy.
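For the WireGuard side, the basic client workflow looks something like this sketch; it assumes a Debian-family system and a provider- or self-host-supplied config saved as /etc/wireguard/wg0.conf:
# Install the tools and generate a key pair
sudo apt install wireguard
wg genkey | tee privatekey | wg pubkey > publickey
# Bring the tunnel up and down using /etc/wireguard/wg0.conf
sudo wg-quick up wg0
sudo wg-quick down wg0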
5. Secure messaging that respects you
Signal is my default for personal chats. It's simple, secure, and widely adopted. The desktop app keeps conversations synced without drama.
Why it matters: End-to-end encryption protects content even if servers are compromised.
How to do it: Install Signal on your phone, then link the desktop app. Encourage your inner circle to join.
Trade-off: Not everyone will switch. That's fine. Use it where you can.
6. Passwords and 2FA, properly
    Store strong, unique passwords in KeePassXC and use time‑based one‑time codes. You’ll never reuse a weak password again. Use ProtonPass if you want a more mainstream option.
Why it matters: Credential stuffing is rampant. Unique passwords and 2FA stop it cold.
How to do it: Create a KeePassXC vault, generate 20-plus character passwords, and enable TOTP for accounts that support it. Back up the vault securely.
Trade-off: A small setup hurdle. After a week, it becomes second nature.
Top 6 Best Password Managers for Linux [2024]Linux Password Managers to the rescue!It's FOSSAnkush Das
7. Email with privacy in mind
    Use ProtonMail for personal email. Add aliasing to keep your main address clean. For newsletters, pipe them into an RSS reader so your inbox isn’t a tracking playground.
Why it matters: Email carries identity. Aliases cut spam, and RSS limits pixel tracking.
How to do it: Create a Proton account. Use aliases for sign-ups. Subscribe to newsletters via RSS feeds if available or use a privacy-friendly digest service.
Trade-off: Some newsletters force email only. Accept a separate alias or unsubscribe.
Good, Better, Best
    Browser
    Good: Firefox with uBlock Origin.
    Better: Add NoScript and tweak site permissions.
Best: Harden about:config and use containers for logins.
Search
    Good: Startpage as default.
    Better: Use a trusted SearXNG instance.
Best: Self-host SearXNG and monitor queries.
Network filtering
    Good: Pi‑hole or AdGuard Home on a spare device.
    Better: Add curated blocklists and per‑client rules.
Best: Run on a reliable server with automatic updates and logging.
DNS and VPN
    Good: Browser DNS‑over‑HTTPS.
    Better: System‑wide DoH or DoT.
Best: WireGuard with your own server or a vetted provider.
Messaging
    Good: Signal for core contacts.
    Better: Encourage groups to adopt.
Best: Use disappearing messages and safety numbers.
Passwords and 2FA
    Good: KeePassXC vault and TOTP for key accounts.
    Better: Unique passwords everywhere and hardware‑encrypted backups.
Best: Hardware tokens where supported plus KeePassXC.
Email
    Good: Proton for personal mail.
    Better: Aliases per service.
Best: RSS for newsletters and strict filtering rules.
Time to implement
Quick wins: Browser hardening, search swap, Signal setup. About 60 to 90 minutes.
Medium: KeePassXC vault, initial 2FA rollout. About 90 minutes.
Weekend projects: Pi-hole or AdGuard Home, WireGuard. About 3 to 5 hours depending on your comfort.
Conclusion
    Start with what you control. The browser, your passwords, your default search. Privacy is cumulative. One small change today makes the next change easier tomorrow. If you keep going, the internet feels calmer, like you finally opened a window in a stuffy room.
  20. by: Abhishek Prakash
    Sat, 08 Nov 2025 17:52:50 +0530

    Learn by doing, not just reading or watching.
    Pen-testing can’t be mastered by watching videos or reading blogs alone. You need to get your hands dirty.
    Pentora Box turns each Linux Handbook tutorial into a self-try exercise. Every lab gives you a realistic, safe environment where you can explore reconnaissance, scanning, exploitation, and post-exploitation, step by step.
    How to use it?
    Curious how you can get started with ethical hacking and pen-testing for free with these hands-on labs? It's easy. Here's what you need to do:
    Step 1: Pick a lab to practice
Choose from a curated list of hands-on pen-testing exercises, from OSINT to exploitation. The labs are not in a particular order, but it's good practice to follow this progression:
🧭 Reconnaissance Track: Scout the target for attack surface and vulnerabilities.
⚔️ Exploitation Track: Simulate attacks after finding vulnerabilities.
🛡️ Defense Track: Monitor your system and network and harden your defenses.
Step 2: Set up locally
Each lab includes setup instructions. It's good to use Kali Linux, as it often includes the required tools. You can also use Debian or Ubuntu-based distributions, as the package installation commands will work the same. You can try it on any other Linux distro too, as long as you manage to install the required packages.
The labs are safe to practice, as they use machines from VulnHub, a platform dedicated to pen-testing exercises.
    Step 3: Execute and learn
    Run commands, observe output, fix errors, and build muscle memory, the hacker way. The tutorials explain the output so that you can understand what's going on and what you should be focusing on after running the commands.
💡 Each lab is designed for localhost or authorized test targets. No external attacks. Always hack responsibly.
Before you start: Setting up your practice environment
    You don’t need a dedicated server or paid sandbox to begin. All labs can be practiced on your Linux system or a virtual machine.
    Recommended Setup:
🐧 Kali Linux/ParrotOS/Debian/Ubuntu + tools
🐳 Docker (for local vulnerable targets; see the example below)
⚙️ VS Code or a terminal-based editor
🔒 Good ethics: always test in legal environments
🚧 These labs are designed for educational use on local or authorized environments only. Never attempt to exploit real systems without permission. Always respect the principles of responsible disclosure and digital ethics.
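If you want a local target to practice against, a deliberately vulnerable container is a self-contained option. This is a sketch: the DVWA image is a popular community image, the port mapping is only an example, and you should keep it bound to localhost:
# Run a deliberately vulnerable web app, bound to localhost only
docker run --rm -d -p 127.0.0.1:8080:80 vulnerables/web-dvwa
# Then practice reconnaissance against it
nmap -sV -p 8080 127.0.0.1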
Stay in touch for future labs
New labs are added regularly. Subscribe to get notified when a new tool, challenge, or lab goes live. You can also share your results or request new topics in our community forum or newsletter.
  21. by: Abhishek Prakash
    Fri, 07 Nov 2025 18:12:51 +0530

After publishing the Linux Networking at Scale course, and while we work on the next one, I am proud to present a super long but immensely helpful hands-on guide that walks you through the steps from creating an open source project to submitting it to the CNCF. The guide is accessible to members of all levels.
Building and Publishing an Open Source Project to CNCFA hands-on guide to creating, documenting, and submitting an open source project to the CNCF Landscape.Linux HandbookSachin H R
Sachin, author of our Kubernetes Operators course, faced a lack of organized documentation when he worked on his project, KubeReport. He shared his personal notes in the form of a guide with some sample code.
    Please note that this is more suitable for Kubernetes and Cloud Native projects.
    Here's why you should get LHB Pro membership:
    ✅ Get access to Linux for DevOps, Docker, Ansible, Systemd and other text courses
    ✅ Get access to Kubernetes Operator and SSH video course
    ✅ Get 6 premium books on Bash, Linux and Ansible for free
    ✅ No ads on the website
    Get Pro Membership  
     
  22. by: Sachin H R
    Fri, 07 Nov 2025 17:49:50 +0530

The idea for a practical guide to building an open source project and publishing it to the CNCF came to me when I was working on KubeReport, an open source tool that automatically generates PDF/CSV deployment reports from your Kubernetes cluster. It is designed for DevOps teams, QA, and managers. It can auto-email reports, integrate with Jira, and track exactly what got deployed and when.
I noticed that there wasn't clear enough documentation on how to create a project that adheres to CNCF standards, so I created this guide from the experience I gained with KubeReport.
💡 I have created a small project, KubePRC (Pod Restart Counter), for you to practice hands-on Kubernetes concepts before building your own open source products in any programming language. I presume that you are familiar with some sort of coding and GitHub; explaining those things is out of scope for this guide.
Step 0: Ask yourself first: Why are you building the project?
Before you start building anything, be clear about why you are doing it. Think about these three points:
Market gap
Market trend
Long-term vision
Let me take the example of my existing project KubeReport again.
    🌉 Market Gap - Automatic reports post deployment
    In fast-moving environments with 40–50 clients, deployments happen every day. After deployment, we often rely on manual smoke tests before involving QA. But issues like:
Missed service failures
No track of deployment count
No visibility into images or teams involved
...often surface only after clients report problems.
    This is not just a company-specific issue. These gaps are common across the DevOps world.
    KubeReport fills that gap. It provides a centralized, auditable report after every deployment — in the form of downloadable PDFs or CSVs — sent automatically to managers, clients, Jira tickets and email groups.
    📈 Market Trend – Rising demand for AI-driven automation
    As DevOps matures, there's an increasing demand for:
Lightweight, CLI-based tools integrated into pipelines
Immediate post-deployment health visibility
Intelligent automation and alerting systems
🤖 Future Scope – AI-powered task automation
    In the long term, the goal is to reduce manual intervention by integrating AI to:
Detect anomalies in restart counts based on historical deployment trends
Automatically classify failures (e.g., infra-related vs. app-related)
Generate intelligent deployment health reports
Recommend or trigger self-healing actions (like auto-restart, scaling, or rollback)
These enhancements will empower teams to act faster with minimal manual input, reducing human error and increasing confidence in every release.
    This is how I outlined it before creating the KubeReport tool. You get the gist. You should build a tool that not only solves real problems but also has a future scope for improvements.
    🔍 Step 1: Check if your idea already exists
Before building KubeReport, we asked ourselves whether a tool like it already existed.
    If an idea already exists — for example, a MySQL Operator — you have three options:
Don't build it
Build a better version
Solve it for a different target (e.g., MongoDB or Postgres)
In our case, there was no specific open-source tool that automated Kubernetes deployment reports like this, so we started building.
    💻 Step 2: Language & tech stack selection
    Kubernetes is written in Go, which makes it a strong choice for any native integration. Our goals were:
Fast performance
Access to client libraries
Ease of deployment
So we used:
Go for core logic
Kubernetes APIs to fetch pod/deployment data
Go PDF/CSV libraries for report generation
📋 You can adapt your stack based on your needs. Choose what offers good performance, community support, and personal comfort.
🧩 Step 3: Design the Architecture
    If your project has multiple components (like frontend, backend, APIs, and DB), architecture diagrams can be very useful.
    I recommend:
Miro or Whimsical for quick architecture and flow diagrams
Weekly planning: what to build this week, next, and this month
Breaking down work into phases keeps the project manageable.
    🛠️ Step 4: Start Small – "Hello World" First
    Always begin with a small, functional unit.
    For example, the first version of KubeReport:
Listed pods from Kubernetes
Generated a simple PDF
That was it. Later, we added:
CSV format
Deployment filters
Auto-email feature
Cloud storage
✅ One step at a time. Build a small working thing, then grow.
    Let's see all this with a sample project. Feel free to replicate the steps.
    Building kubeprc (Kube Pod Restart Counter)
    Let’s take kubeprc as an example project for hands-on practice.
    kubeprc (short for Kube Pod Restart Counter) is a lightweight open source tool that scans your Kubernetes cluster and reports on pod restart counts. It’s ideal for DevOps and SRE teams who need to monitor crash loops and container stability — either in real-time or as part of automated checks.
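To get a feel for what the tool automates, here is roughly the manual equivalent with kubectl; a sketch that assumes single-container pods for the JSONPath:
# List pods across all namespaces, ordered by the first container's restart count
kubectl get pods -A --sort-by='.status.containerStatuses[0].restartCount'
# Or print just the restart counts as custom columns
kubectl get pods -A -o custom-columns='NAMESPACE:.metadata.namespace,POD:.metadata.name,RESTARTS:.status.containerStatuses[0].restartCount'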
    Why are we building this?
    As part of early discovery (Step 1 & Step 2), we identified a clear market gap:
    There was no simple, focused tool that:
Counted pod restarts across a cluster
Worked both locally and in-cluster
Could be deployed cleanly via Helm
Was lightweight and customizable
While 2–3 similar tools exist, they either lack flexibility or are too heavy. Our aim is to build:
A focused tool with a clean CLI
Extra features based on real-world DevOps use cases
A Helm chart for seamless integration in CI/CD pipelines or monitoring stacks
Feature Planning
I used Miro to lay out the feature plan. You can use any project management tool of your choice.
    Tech Stack
    Kubernetes is written in Go, so client libraries and API access are very well supported. Go offers great performance, concurrency, and portability.
Go: Core CLI logic & Kubernetes client
Docker: Containerization for portability
Helm: Kubernetes deployment automation
Minikube / Cloud (Azure/GCP/AWS): Local & cloud testing environments
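A rough sketch of how this stack comes together at the start of such a project; the module path, image tag, and chart location are placeholders, not the actual KubePRC repository:
# Initialize the Go module and pull in the Kubernetes client library
go mod init example.com/you/kubeprc
go get k8s.io/client-go@latest
# Containerize the CLI (assumes you have written a Dockerfile) and scaffold a Helm chart
docker build -t kubeprc:dev .
helm create charts/kubeprc
# Test locally against a Minikube cluster
minikube start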
     
  23. by: Geoff Graham
    Thu, 06 Nov 2025 15:57:49 +0000

    Here’s something you’ll spot in the wild:
<div class="btn" role="button">Custom Button</div>
This is one of those code smells that makes me stop in my tracks because we know there's a semantic <button> element that we can use instead. There's a whole other thing about conflating anchors (e.g., <a class="btn">) and buttons, but that's not exactly what we're talking about here, and we have a great guide on it.
A semantic <button> element makes a lot more sense than reaching for a <div> because, well, semantics. At least that's what the code smell triggers for me. I can generically name some of the semantic benefits we get from a <button> off the top of my head:
    Interactive states Focus indicators Keyboard support But I find myself unable to explicitly define those benefits. They’re more like talking points I’ve retained than clear arguments for using <button> over <div>. But as I’ve made my way through Sara Soueidan’s Practical Accessibility course, I’m getting a much clearer picture of why <button> is a best practice.
    Let’s compare the two approaches:
CodePen Embed Fallback
Did you know that you can inspect the semantics of these directly in DevTools? I'm ashamed to admit that I didn't before watching Sara's course.
    There’s clearly a difference between the two “buttons” and it’s more than visual. Notice a few things:
The <button> gets exposed as a button role while the <div> is a generic role. We already knew that.
The <button> gets an accessible label that's equal to its content.
The <button> is focusable and gets a click listener right out of the box.
I'm not sure exactly why someone would reach for a <div> over a <button>. But if I had to wager a guess, it's probably because styling a <button> is tougher than styling a <div>. You've got to reset all those user agent styles, which feels like an extra step in the process when a <div> comes with no styling opinions whatsoever, save for it being a block-level element as far as document flow goes.
I don't get that reasoning when all it takes to reset a button's styles is a CSS one-liner:
CodePen Embed Fallback
From here, we can use the exact same class to get the exact same appearance:
CodePen Embed Fallback
What seems like more work is the effort it takes to re-create the same built-in benefits we get from a semantic <button> specifically for a <div>. Sara's course has given me the exact language to put words to the code smells:
The div does not have Tab focus by default. It is not recognized by the browser as an interactive element, even after giving it a button role. The role does not add behavior, only how it is presented to screen readers.
We need to give it a tabindex. But even then, we can't operate the button on Space or Return.
We need to add that interactive behavior as well, likely using a JavaScript listener for a button press to fire a function.
Did you know that the Space and Return keys do different things? Adrian Roselli explains it nicely, and it was a big TIL moment for me. Probably need different listeners to account for both interactions.
And, of course, we need to account for a disabled state. All it takes is a single HTML attribute on a <button>, but a <div> probably needs yet another function that looks for some sort of data-attribute and then sets disabled on it.
Oh, but hey, we can slap <div role=button> on there, right? It's super tempting to go there, but all that does is expose the <div> as a button to assistive technology. It's announced as a button, but does nothing to recreate the interactions needed for the complete user experience a <button> provides. And no amount of styling will fix those semantics, either. We can make a <div> look like a button, but it's not one despite its appearances.
    Anyway, that’s all I wanted to share. Using semantic elements where possible is one of those “best practice” statements we pick up along the way. I teach it to my students, but am guilty of relying on the high-level “it helps accessibility” reasoning that is just as generic as a <div>. Now I have specific talking points for explaining why that’s the case, as well as a “new-to-me” weapon in my DevTools arsenal to inspect and confirm those points.
    Thanks, Sara! This is merely the tip of the iceberg as far as what I’m learning (and will continue to learn) from the course.
    Explaining the Accessible Benefits of Using Semantic HTML Elements originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  24. by: Theena Kumaragurunathan
    Thu, 06 Nov 2025 10:56:15 GMT

    The internet of the early 2000s—what I once called the revelatory internet—felt like an endless library with doors left ajar. Much of that material circulated illegally, yes. I am not advocating a return to unchecked piracy. But the current licensing frameworks are failing both artists and audiences, and it’s worth asking why—and what a better model could look like.
    Hands up if you weren’t surprised to see streaming services plateauing or shedding subscribers.
    Prices are rising across Netflix, Spotify, and their peers, and more people are quietly returning to the oldest playbook of the internet: piracy. Is the golden age of streaming over?
    To answer that, I’ll step back.
    Sailing the High Seas Over the Years
Internet piracy is as old as the modern internet. It began on scrappy bulletin boards and FTP servers where cracked software and MP3s slipped between hobbyists. When A&M Records v. Napster reached the Ninth Circuit, the court drew an early line in the sand: Napster was liable for contributory and vicarious infringement.
    That is when we learnt that convenience was not a defense.
    I was 18 when I went down a musical rabbit hole that I am still burrowing through today. Napster’s fall didn’t slow me or other curious music lovers. What started as single-track scavenging evolved into long, obsessive dives where I would torrent entire discographies of artists.
    Between roughly 2003 and 2011, the height of my period of music obsessiveness, I amassed over 500GB of music—eclectic, weird, and often unreleased in mainstream catalogs—that I would never have discovered without the internet. The collection doesn’t sound huge today, but it is meticulously curated and tagged. It includes artists who refuse to bend to the logic of Spotify or the market itself, rarities from little known underground heavy metal scenes in countries you would never associate with heavy metal, alongside music purchased directly from artists, all sans DRM.
    Then came a funny detour: in the first months of the pandemic, I made multiple backups of this library, bought an old ThinkPad, and set up a Plex server (I use Jellyfin as well now).
That one decision nudged me into Linux, then Git, then Vim and Neovim, and finally into the wonderful and weird world of Emacs. You could argue that safeguarding those treasures opened the door to my FOSS worldview.
    The act of keeping what I loved pushed me toward tools I could control. It also made me view convenience with suspicion.
    The Golden Era of Streaming
    As broadband matured, piracy shifted from downloads to streams. Cyberlockers, link farms, IPTV boxes, and slick portals mimicked legitimate convenience. Europe watched closely. The EUIPO’s work shows a simple pattern: TV content leads piracy categories, streaming is the main access path, and live sports piracy surged after earlier declines.
    The lesson is simple.
    Technology opens doors.
    Law redraws boundaries.
    Economics decide which doors people choose.
    When lawful access is timely, comprehensive, and fairly priced, piracy ebbs. When it isn’t, the current finds its old channels.
    The Illusion of Ownership
    Here’s the pivot. Over the last decade I’ve “bought” movies, games, ebooks—only to have them vanish. I’ve watched albums grey out and films disappear from paid libraries. Ownership, in the mainstream digital economy, is legal fiction unless you control the files, formats, keys, and servers. Most of us don’t. We rent access dressed up as possession.
    The Rental Economy
    The dominant model today is licensing. You don’t buy a movie on a platform; you buy a license to stream or download within constraints the platform sets. Those constraints are enforced by DRM, device policies, region locks, and revocation rights buried in terms of service. If a platform loses rights, changes its catalog, or retires a title, your “purchase” becomes a broken link. The vocabulary is revealing: platforms call catalog changes “rotations,” not removals.
    This is not a moral judgment; it’s an operational one. Licensing aligns incentives with churn, not permanence. Companies optimize for monthly active users, not durable collections. If you are fine with rentals, this works. If you care about ownership, it fails.
    Two quick examples illustrate the point. First, music that is available today can be replaced tomorrow by a remaster that breaks playlists or metadata (not everyone likes remasters). Second, film libraries collapse overnight due to regional rights reshuffles or cost-cutting decisions.
    Both reveal a fundamental truth to this illusion of ownership: your access is contingent, not guaranteed. The interface encourages the illusion of permanence; the contract denies it.
    What Ownership Means in 2025
    Given that reality, what does it mean to own digital content now?
Files: You keep the data itself, not pointers to it. If the internet vanished, you'd still have your collection.
Open formats: Your files should be playable and readable across decades. Open or well-documented formats are your best bet.
Keys: If encryption is involved, you control the keys. No external gatekeeper can revoke your access.
Servers: You decide where the content lives and how it's served—local storage, NAS, or self-hosted services—so policy changes elsewhere don't erase your library.
Ownership, in 2025, is the alignment of all four. If you lose any one pillar, you re-enter the rental economy. Files without open formats risk obsolescence. Open formats without keys are moot if DRM blocks you. Keys without servers mean you're still dependent on someone else's uptime. Servers without backups are bravado that ends in loss.
    Self-Hosting as Resistance
    Self-hosting is the pragmatic response to the rental economy—not just for sysadmins, but for anyone who wants to keep the things that matter. My pandemic Plex story is a case study. I copied and verified my music library. I set up an old ThinkPad as a lightweight server. I learned enough Linux to secure and manage it, then layered in Git for configuration, Vim and Neovim for editing, and eventually Emacs for writing and project management. The journey wasn’t about becoming a developer; it was about refusing impermanence as the default.
    A minimal self-hosting stack looks like this:
Library: Organize, tag, and normalize files. Consistent metadata is half the battle.
Storage: Redundant local storage (RAID or mirrored drives) plus offsite backups. Assume failure; plan for recovery.
Indexing: A service (Plex, Jellyfin, or similar) that scans and serves your library. Keep your index portable.
Access: Local-first, with optional secure remote access. Your default should be offline resilience, not cloud dependency.
Maintenance: Occasional updates, integrity checks, and rehearsed restore steps. If you can redeploy in an afternoon, you own it.
Self-hosting doesn't require perfection. It asks for intent and a few steady habits. You don't need new hardware; you need a small tolerance for learning and the patience to patch.
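As a concrete illustration of the indexing layer, here is a minimal sketch of running Jellyfin in a container; the host paths are placeholders for wherever your config and media actually live, and 8096 is Jellyfin's default web port:
# Serve an existing media library with Jellyfin (paths are illustrative)
docker run -d --name jellyfin \
  -p 8096:8096 \
  -v /srv/jellyfin/config:/config \
  -v /srv/media:/media:ro \
  jellyfin/jellyfin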
    A Pragmatic Model
    Not everything needs to be owned. The point is to decide deliberately what you keep and what you rent. A tiered model helps:
Local-first files: Irreplaceable work, personal archives, and media you care about—stored locally with backups. Think original recordings, purchased DRM-free releases, research materials, and family photos.
Sync-first files: Active documents that benefit from multi-device access—synced across trusted services but maintained in open formats with local copies. If sync breaks, you still have a working file.
Self-hosted services: Media servers, note systems, photo galleries, and small web tools that you want available on your terms. Prioritize services with export paths and minimal complexity.
Cloud rentals: Ephemeral consumption—new releases, casual viewing, niche apps. Treat these as screenings, not acquisitions. Enjoy them and let them go.
To choose, ask three questions:
Is it mission-critical or meaningful beyond a season?
Can I store it in an open format without legal encumbrances?
Will I regret losing it?
If the answers skew yes, pull it into local-first or self-hosted. If not, rent with clear eyes.
    Costs and Trade-Offs
    The price of ownership is maintenance. Time to learn basics, time to patch, time to back up. There is risk—drives fail, indexes corrupt, formats change. But with small routines, the costs are manageable, and the upside is real: continuity.
    The trade-offs can be framed simply:
Time: A few hours to set up; a few minutes a month to check.
Money: Modest hardware (used laptop, external drives) and, optionally, a NAS. The cost amortizes over years.
Complexity: Start with one service. Document your steps. Prefer boring tools. Boring is dependable.
Risk: Reduce with redundancy and rehearsed restores. Test a recovery once a year.
The payoff is permanence. You own what you can keep offline. You control what you can serve on your own terms. You protect the work and the art that shaped you.
Self-Hosting, in old and new ways ©Theena Kumaragurunathan, 2025
Bringing the Arc Together
    History matters because it explains behavior over time. When lawful access is timely, comprehensive, and fairly priced, piracy ebbs. When it isn’t, the current returns to old channels. The platforms call this leakage. I call it correction. People seek what isn’t offered—availability, completeness, fairness—and they will keep seeking until those needs are met.
    My own path tracks that arc. I learned to listen curiously in the torrent years, built a personal library, then chose to keep it. The choice pushed me toward free and open-source software, not as ideology but as practice: the practice of retaining what matters. If streaming’s golden age is ending, it is only because its economics revealed themselves. Rentals masquerading as purchases do not create trust; they teach caution.
    What Next
    A better way respects both artists and audiences. It looks like more direct purchase channels without DRM, fair global pricing, and clear catalog guarantees. It looks like platforms that treat permanence as a feature, not a bug. It looks like individuals who decide, calmly, what to keep and what to rent.
    You don’t own what you can’t keep offline. You only rent the right to forget. Owning is choosing to remember—files, formats, keys, servers—held together by the patience to maintain them.
  25. by: Abhishek Prakash
    Thu, 06 Nov 2025 07:12:58 GMT

I recently upgraded to Fedora 43 and one thing I noticed was that image thumbnails were not showing up in the Nautilus file manager. It wasn't just recent file formats like WebP or AVIF; thumbnails were not showing up even for classic image formats like PNG and JPEG.
Image thumbnails not showing up
As you can see in the screenshot above, thumbnails for video files were displayed properly. Even PDF and EPUB files displayed thumbnails.
Actually, the behavior was weirdly inconsistent, as it did show thumbnails for some of the older images, and I am sure those thumbnails were there before I upgraded to Fedora 43 from version 42.
Thumbnails displayed for some images but not for all
🔑 The one-line solution: I fixed the issue and got image previews back in the file manager with a single command:
sudo dnf install glycin-thumbnailer
If you are facing the same issue in Fedora, you can try that and get on with your life. But if you are curious, read on to learn why the issue occurred in the first place and how the command above fixed it. Knowing these little things adds to your knowledge and helps you improve as a Linux user.
    The mystery of the missing thumbnails
I looked for clues on the Fedora forum, the obvious hunting ground for such issues. There was advice to clear the thumbnail cache and restart Nautilus. My gray cells were hinting that it was a futile exercise, and it indeed was. It changed nothing.
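For reference, this is roughly what that suggested (and, in my case, fruitless) fix looks like; the cache path is the standard freedesktop thumbnail location:
# Clear the cached thumbnails and quit Nautilus so it restarts fresh
rm -rf ~/.cache/thumbnails/*
nautilus -q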
Cleaning the thumbnail cache only resulted in losing all the existing image previews. This gave me a hint that something did change between Fedora 42 and Fedora 43, as images from the Fedora 42 days were displaying thumbnails earlier.
    No thumbnailer for images
    I checked the thumbnailer to see what kind of thumbnailers were in use on my system:
ls /usr/share/thumbnailers/
And it showed me six thumbnailers, none of them meant to work with images.
Various thumbnailers present on my system, none for images
Evince is for documents, gnome-epub for EPUB files, totem for video files, and a few more for fonts, .mobi files, and office files.
Most distributions use the gdk-pixbuf library for image thumbnails, and clearly, there was no thumbnailer from gdk-pixbuf2 on my system.
abhishek@fedora:~$ ls /usr/share/thumbnailers/
evince.thumbnailer  gnome-font-viewer.thumbnailer  gsf-office.thumbnailer
gnome-epub-thumbnailer.thumbnailer  gnome-mobi-thumbnailer.thumbnailer  totem.thumbnailer
I found it weird because I checked and saw that gdk-pixbuf2 was properly installed, and yet there were no thumbnailers installed from it.
    I did reinstall gdk-pixbuf2:
sudo dnf reinstall gdk-pixbuf2
But even then, it didn't install the thumbnailer:
abhishek@fedora:~$ dnf list --installed | grep -i thumbnailer
evince-thumbnailer.x86_64 48.1-1.fc43 <unknown>
gnome-epub-thumbnailer.x86_64 1.8-3.fc43 <unknown>
totem-video-thumbnailer.x86_64 1:43.2-6.fc43 <unknown>
I was tempted to explicitly install gdk-pixbuf2-thumbnailer, but then I decided to investigate why it had gone missing in the first place. Thankfully, this investigation yielded the correct result.
    Fedora 43 switched to new image loader
    I came across this discussion that hinted that Fedora is now moving towards glycin, a Rust-based, sandboxed, and extendable image loading framework.
Interesting. But when I checked the installed DNF packages, I saw a few glycin packages and no thumbnailer.
dnf list --installed | grep -i glycin
glycin-libs.i686 2.0.4-1.fc43 <unknown>
glycin-libs.x86_64 2.0.4-1.fc43 <unknown>
glycin-loaders.i686 2.0.4-1.fc43 <unknown>
glycin-loaders.x86_64 2.0.4-1.fc43 <unknown>
And thus I decided to install glycin-thumbnailer:
sudo dnf install glycin-thumbnailer
And this move solved the case of missing image previews. I closed the file manager, opened it again, and voila! All the thumbnails came back to life, even for WebP and AVIF files.
Image thumbnails now properly displayed
Personally, I feel that glycin is a bit slow in generating thumbnails. I hope I am wrong about that.
📋 If you want to display thumbnails for RAW image files, you need to install libopenraw first.
I hope this case file helps you investigate and solve the mystery of missing image previews on your system as well. The solution is a single command, a missing package, but how I arrived at that conclusion is the real fun, just like reading an Agatha Christie novel 🕵️
