  1. by: Abhishek Kumar Sun, 20 Apr 2025 14:46:21 GMT Large Language Models (LLMs) are powerful, but they have one major limitation: they rely solely on the knowledge they were trained on. This means they lack real-time, domain-specific updates unless retrained, an expensive and impractical process. This is where Retrieval-Augmented Generation (RAG) comes in. RAG allows an LLM to retrieve relevant external knowledge before generating a response, effectively giving it access to fresh, contextual, and specific information. Imagine having an AI assistant that not only remembers general facts but can also refer to your PDFs, notes, or private data for more precise responses. This article takes a deep dive into how RAG works, how LLMs are trained, and how we can use Ollama and Langchain to implement a local RAG system that fine-tunes an LLM’s responses by embedding and retrieving external knowledge dynamically. By the end of this tutorial, we’ll build a PDF-based RAG project that allows users to upload documents and ask questions, with the model responding based on stored data. ✋I’m not an AI expert. This article is a hands-on look at Retrieval Augmented Generation (RAG) with Ollama and Langchain, meant for learning and experimentation. There might be mistakes, and if you spot something off or have better insights, feel free to share. It’s nowhere near the scale of how enterprises handle RAG, where they use massive datasets, specialized databases, and high-performance GPUs.What is Retrieval-Augmented Generation (RAG)?RAG is an AI framework that improves LLM responses by integrating real-time information retrieval. Instead of relying only on its training data, the LLM retrieves relevant documents from an external source (such as a vector database) before generating an answer. How RAG worksQuery Input – The user submits a question.Document Retrieval – A search algorithm fetches relevant text chunks from a vector store.Contextual Response Generation – The retrieved text is fed into the LLM, guiding it to produce a more accurate and relevant answer.Final Output – The response, now grounded in the retrieved knowledge, is returned to the user.Why use RAG instead of fine-tuning?No retraining required – Traditional fine-tuning demands a lot of GPU power and labeled datasets. RAG eliminates this need by retrieving data dynamically.Up-to-date knowledge – The model can refer to newly uploaded documents instead of relying on outdated training data.More accurate and domain-specific answers – Ideal for legal, medical, or research-related tasks where accuracy is crucial.How LLMs are trained (and why RAG improves them)Before diving into RAG, let’s understand how LLMs are trained: Pre-training – The model learns language patterns, facts, and reasoning from vast amounts of text (e.g., books, Wikipedia).Fine-tuning – It is further trained on specialized datasets for specific use cases (e.g., medical research, coding assistance).Inference – The trained model is deployed to answer user queries.While fine-tuning is helpful, it has limitations: It is computationally expensive.It does not allow dynamic updates to knowledge.It may introduce biases if trained on limited datasets.With RAG, we bypass these issues by allowing real-time retrieval from external sources, making LLMs far more adaptable. Building a local RAG application with Ollama and LangchainIn this tutorial, we'll build a simple RAG-powered document retrieval app using LangChain, ChromaDB, and Ollama. 
The app lets users upload PDFs, embed them in a vector database, and query for relevant information. 💡All the code is available in our GitHub repository. You can clone it and start testing right away.Installing dependenciesTo avoid messing up our system packages, we’ll first create a Python virtual environment. This keeps our dependencies isolated and prevents conflicts with system-wide Python packages. Navigate to your project directory and create a virtual environment: cd ~/RAG-Tutorial python3 -m venv venvNow, activate the virtual environment: source venv/bin/activateOnce activated, your terminal prompt should change to indicate that you are now inside the virtual environment. With the virtual environment activated, install the necessary Python packages using requirements.txt: pip install -r requirements.txtThis will install all the required dependencies for our RAG pipeline, including Flask, LangChain, Ollama, and Pydantic. Once installed, you’re all set to proceed with the next steps! Project structureOur project is structured as follows: RAG-Tutorial/ │── app.py # Main Flask server │── embed.py # Handles document embedding │── query.py # Handles querying the vector database │── get_vector_db.py # Manages ChromaDB instance │── .env # Stores environment variables │── requirements.txt # List of dependencies └── _temp/ # Temporary storage for uploaded filesStep 1: Creating app.py (Flask API Server)This script sets up a Flask server with two endpoints: /embed – Uploads a PDF and stores its embeddings in ChromaDB./query – Accepts a user query and retrieves relevant text chunks from ChromaDB.route_embed(): Saves an uploaded file and embeds its contents in ChromaDB.route_query(): Accepts a query and retrieves relevant document chunks.import os from dotenv import load_dotenv from flask import Flask, request, jsonify from embed import embed from query import query from get_vector_db import get_vector_db load_dotenv() TEMP_FOLDER = os.getenv('TEMP_FOLDER', './_temp') os.makedirs(TEMP_FOLDER, exist_ok=True) app = Flask(__name__) @app.route('/embed', methods=['POST']) def route_embed(): if 'file' not in request.files: return jsonify({"error": "No file part"}), 400 file = request.files['file'] if file.filename == '': return jsonify({"error": "No selected file"}), 400 embedded = embed(file) return jsonify({"message": "File embedded successfully"}) if embedded else jsonify({"error": "Embedding failed"}), 400 @app.route('/query', methods=['POST']) def route_query(): data = request.get_json() response = query(data.get('query')) return jsonify({"message": response}) if response else jsonify({"error": "Query failed"}), 400 if __name__ == '__main__': app.run(host="0.0.0.0", port=8080, debug=True)Step 2: Creating embed.py (embedding documents)This file handles document processing, extracts text, and stores vector embeddings in ChromaDB. 
allowed_file(): Ensures only PDFs are processed.save_file(): Saves the uploaded file temporarily.load_and_split_data(): Uses UnstructuredPDFLoader and RecursiveCharacterTextSplitter to extract text and split it into manageable chunks.embed(): Converts text chunks into vector embeddings and stores them in ChromaDB.import os from datetime import datetime from werkzeug.utils import secure_filename from langchain_community.document_loaders import UnstructuredPDFLoader from langchain_text_splitters import RecursiveCharacterTextSplitter from get_vector_db import get_vector_db TEMP_FOLDER = os.getenv('TEMP_FOLDER', './_temp') def allowed_file(filename): return filename.lower().endswith('.pdf') def save_file(file): filename = f"{datetime.now().timestamp()}_{secure_filename(file.filename)}" file_path = os.path.join(TEMP_FOLDER, filename) file.save(file_path) return file_path def load_and_split_data(file_path): loader = UnstructuredPDFLoader(file_path=file_path) data = loader.load() text_splitter = RecursiveCharacterTextSplitter(chunk_size=7500, chunk_overlap=100) return text_splitter.split_documents(data) def embed(file): if file and allowed_file(file.filename): file_path = save_file(file) chunks = load_and_split_data(file_path) db = get_vector_db() db.add_documents(chunks) db.persist() os.remove(file_path) return True return FalseStep 3: Creating query.py (Query processing)It retrieves relevant information from ChromaDB and uses an LLM to generate responses. get_prompt(): Creates a structured prompt for multi-query retrieval.query(): Uses Ollama's LLM to rephrase the user query, retrieve relevant document chunks, and generate a response.import os from langchain_community.chat_models import ChatOllama from langchain.prompts import ChatPromptTemplate, PromptTemplate from langchain_core.output_parsers import StrOutputParser from langchain_core.runnables import RunnablePassthrough from langchain.retrievers.multi_query import MultiQueryRetriever from get_vector_db import get_vector_db LLM_MODEL = os.getenv('LLM_MODEL') OLLAMA_HOST = os.getenv('OLLAMA_HOST', 'http://localhost:11434') def get_prompt(): QUERY_PROMPT = PromptTemplate( input_variables=["question"], template="""You are an AI assistant. Generate five reworded versions of the user question to improve document retrieval. Original question: {question}""", ) template = "Answer the question based ONLY on this context:\n{context}\nQuestion: {question}" prompt = ChatPromptTemplate.from_template(template) return QUERY_PROMPT, prompt def query(input): if input: llm = ChatOllama(model=LLM_MODEL) db = get_vector_db() QUERY_PROMPT, prompt = get_prompt() retriever = MultiQueryRetriever.from_llm(db.as_retriever(), llm, prompt=QUERY_PROMPT) chain = ({"context": retriever, "question": RunnablePassthrough()} | prompt | llm | StrOutputParser()) return chain.invoke(input) return NoneStep 4: Creating get_vector_db.py (Vector database management)It initializes and manages ChromaDB, which stores text embeddings for fast retrieval. 
get_vector_db(): Initializes ChromaDB with the Nomic embedding model and loads stored document vectors.

import os
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores.chroma import Chroma

CHROMA_PATH = os.getenv('CHROMA_PATH', 'chroma')
COLLECTION_NAME = os.getenv('COLLECTION_NAME')
TEXT_EMBEDDING_MODEL = os.getenv('TEXT_EMBEDDING_MODEL')
OLLAMA_HOST = os.getenv('OLLAMA_HOST', 'http://localhost:11434')

def get_vector_db():
    embedding = OllamaEmbeddings(model=TEXT_EMBEDDING_MODEL, show_progress=True)
    return Chroma(collection_name=COLLECTION_NAME, persist_directory=CHROMA_PATH, embedding_function=embedding)

Step 5: Environment variables
Create a .env file to store environment variables:

TEMP_FOLDER = './_temp'
CHROMA_PATH = 'chroma'
COLLECTION_NAME = 'rag-tutorial'
LLM_MODEL = 'smollm:360m'
TEXT_EMBEDDING_MODEL = 'nomic-embed-text'

TEMP_FOLDER: Stores uploaded PDFs temporarily.
CHROMA_PATH: Defines the storage location for ChromaDB.
COLLECTION_NAME: Sets the ChromaDB collection name.
LLM_MODEL: Specifies the LLM model used for querying.
TEXT_EMBEDDING_MODEL: Defines the embedding model for vector storage.

I'm using these lightweight models for this tutorial, as I don't have a dedicated GPU to run inference on larger models. You can change the models in the .env file.

Testing the makeshift RAG + LLM pipeline
Now that our RAG app is set up, we need to validate its effectiveness. The goal is to ensure that the system correctly:
Embeds documents – Converts text into vector embeddings and stores them in ChromaDB.
Retrieves relevant chunks – Fetches the most relevant text snippets from ChromaDB based on a query.
Generates meaningful responses – Uses Ollama to construct an intelligent response based on retrieved data.
This testing phase ensures that our makeshift RAG pipeline is functioning as expected and can be fine-tuned if necessary.

Running the Flask server
We first need to make sure our Flask app is running. Open a terminal, navigate to your project directory, and activate your virtual environment:

cd ~/RAG-Tutorial
source venv/bin/activate  # On Linux/macOS
# or
venv\Scripts\activate     # On Windows (if using venv)

Now, run the Flask app:

python3 app.py

If everything is set up correctly, the server should start and listen on http://localhost:8080. Once the server is running, we'll use curl commands to interact with our pipeline and analyze the responses to confirm everything works as expected.

1. Testing document embedding
The first step is to upload a document and ensure its contents are successfully embedded into ChromaDB.

curl --request POST \
  --url http://localhost:8080/embed \
  --header 'Content-Type: multipart/form-data' \
  --form file=@/path/to/file.pdf

Breakdown:
curl --request POST → Sends a POST request to our API.
--url http://localhost:8080/embed → Targets our embed endpoint running on port 8080.
--header 'Content-Type: multipart/form-data' → Specifies that we are uploading a file.
--form file=@/path/to/file.pdf → Attaches a file (in this case, a PDF) to be processed.

What's happening internally?
The server reads the uploaded PDF file.
The text is extracted, split into chunks, and converted into vector embeddings.
These embeddings are stored in ChromaDB for future retrieval.

If something goes wrong (issue – possible cause – fix):
- "status": "error" – File not found or unreadable – Check the file path and permissions.
- collection.count() == 0 – ChromaDB storage failure – Restart ChromaDB and check its logs.

2. Querying the document
Now that our document is embedded, we can test whether relevant information is retrieved when we ask a question.

curl --request POST \
  --url http://localhost:8080/query \
  --header 'Content-Type: application/json' \
  --data '{ "query": "Question about the PDF?" }'

Breakdown:
curl --request POST → Sends a POST request.
--url http://localhost:8080/query → Targets our query endpoint.
--header 'Content-Type: application/json' → Specifies that we are sending JSON data.
--data '{ "query": "Question about the PDF?" }' → Sends our search query to retrieve relevant information.

What's happening internally?
The query (for example, "What's in this file?") is passed to ChromaDB to retrieve the most relevant chunks.
The retrieved chunks are passed to Ollama as context for generating a response.
Ollama formulates a meaningful reply based on the retrieved information.

If the response is not good enough (issue – possible cause – fix):
- Retrieved chunks are irrelevant – Poor chunking strategy – Adjust chunk sizes and retry embedding.
- "llm_response": "I don't know" – Context wasn't passed properly – Check if ChromaDB is returning results.
- Response lacks document details – LLM needs better instructions – Modify the system prompt.

3. Fine-tuning the LLM for better responses
If Ollama's responses aren't detailed enough, we need to refine how we provide context. Tuning strategies:
Improve chunking – Ensure text chunks are large enough to retain meaning but small enough for effective retrieval.
Enhance retrieval – Increase n_results to fetch more relevant document chunks.
Modify the LLM prompt – Add structured instructions for better responses.

Example system prompt for Ollama:

prompt = f"""
You are an AI assistant helping users retrieve information from documents.
Use the following document snippets to provide a helpful answer.
If the answer isn't in the retrieved text, say 'I don't know.'
Retrieved context: {retrieved_chunks}
User's question: {query_text}
"""

This ensures that Ollama:
Uses retrieved text properly.
Avoids hallucinations by sticking to available context.
Provides meaningful, structured answers.

Final thoughts
Building this makeshift RAG LLM tuning pipeline has been an insightful experience, but I want to be clear: I'm not an AI expert. Everything here is something I'm still learning myself. There are bound to be mistakes, inefficiencies, and things that could be improved. If you're someone who knows better or if I've missed any crucial points, please feel free to share your insights. That said, this project gave me a small glimpse into how RAG works. At its core, RAG is about fetching the right context before asking an LLM to generate a response. It's what makes AI chatbots capable of retrieving information from vast datasets instead of just responding based on their training data. Large companies use this technique at scale, processing massive amounts of data, fine-tuning their models, and optimizing their retrieval mechanisms to build AI assistants that feel intuitive and knowledgeable. What we built here is nowhere near that level, but it was still fascinating to see how we can direct an LLM's responses by controlling what information it retrieves. Even with this basic setup, we saw how much impact retrieval quality, chunking strategies, and prompt design have on the final response. This makes me wonder: have you ever thought about training your own LLM? Would you be interested in something like this but fine-tuned specifically for Linux tutorials?
Imagine a custom-tuned LLM that could answer your Linux questions with accurate, RAG-powered responses. Would you use it? Let us know in the comments!
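A quick illustration of the "Enhance retrieval" tip from the tuning section above: in the LangChain-based query.py used in this tutorial, the number of chunks fetched per query is controlled by the retriever's k parameter (LangChain's rough equivalent of n_results). The sketch below assumes you keep the same MultiQueryRetriever setup; the k value of 8 is an arbitrary choice, not something the article prescribes.

```python
# Sketch: widen retrieval in query.py (the k value here is an assumption)
from langchain.retrievers.multi_query import MultiQueryRetriever

def build_retriever(db, llm, query_prompt, k=8):
    # Ask Chroma for more chunks per rephrased query; LangChain's default is 4
    base_retriever = db.as_retriever(search_kwargs={"k": k})
    return MultiQueryRetriever.from_llm(base_retriever, llm, prompt=query_prompt)
```

Fetching more chunks gives the model more context to work with, at the cost of a longer prompt and slower generation, so it is worth experimenting in small increments.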
  2. by: Ojekudo Oghenemaro Emmanuel Sun, 20 Apr 2025 08:04:07 GMT Introduction In today’s digital world, security is paramount, especially when dealing with sensitive data like user authentication and financial transactions. One of the most effective ways to enhance security is by implementing One-Time Password (OTP) authentication. This article explores how to implement OTP authentication in a Laravel backend with a Vue.js frontend, ensuring secure transactions. Why Use OTP Authentication? OTP authentication provides an extra layer of security beyond traditional username and password authentication. Some key benefits include: Prevention of Unauthorized Access: Even if login credentials are compromised, an attacker cannot log in without the OTP. Enhanced Security for Transactions: OTPs can be used to confirm high-value transactions, preventing fraud. Temporary Validity: Since OTPs expire after a short period, they reduce the risk of reuse by attackers. Prerequisites Before getting started, ensure you have the following: Laravel 8 or later installed Vue.js configured in your project A mail or SMS service provider for sending OTPs (e.g., Twilio, Mailtrap) Basic understanding of Laravel and Vue.js In this guide, we’ll implement OTP authentication in a Laravel (backend) and Vue.js (frontend) application. We’ll cover: Setting up Laravel and Vue (frontend) from scratch Setting up OTP generation and validation in Laravel Creating a Vue.js component for OTP input Integrating OTP authentication into login workflows Enhancing security with best practices By the end, you’ll have a fully functional OTP authentication system ready to enhance the security of your fintech or web application. Setting Up Laravel for OTP Authentication Step 1: Install Laravel and Required Packages If you haven't already set up a Laravel project, create a new one: composer create-project "laravel/laravel:^10.0" example-app Next, install the Laravel Breeze package for frontend scaffolding: composer require laravel/breeze --dev After composer has finished installing, run the following command to select the framework you want to use—the Vue configuration: php artisan breeze:install You’ll see a prompt with the available stacks: Which Breeze stack would you like to install? - Vue with Inertia Would you like any optional features? - None Which testing framework do you prefer? - PHPUnit Breeze will automatically install the necessary packages for your Laravel Vue project. You should see: INFO Breeze scaffolding installed successfully. Now run the npm command to build your frontend assets: npm run dev Then, open another terminal and launch your Laravel app: php artisan serve Step 2: Setting up OTP generation and validation in Laravel We'll use a mail testing platform called Mailtrap to send and receive mail locally. If you don’t have a mail testing service set up, sign up at Mailtrap to get your SMTP credentials and add them to your .env file: MAIL_MAILER=smtp MAIL_HOST=sandbox.smtp.mailtrap.io MAIL_PORT=2525 MAIL_USERNAME=1780944422200a MAIL_PASSWORD=a8250ee453323b MAIL_ENCRYPTION=tls MAIL_FROM_ADDRESS=hello@example.com MAIL_FROM_NAME="${APP_NAME}" To send OTPs to users, we’ll use Laravel’s built-in mail services. 
Create a mail class and controller: php artisan make:mail OtpMail php artisan make:controller OtpController Then modify the OtpMail class: <?php namespace App\Mail; use Illuminate\Bus\Queueable; use Illuminate\Contracts\Queue\ShouldQueue; use Illuminate\Mail\Mailable; use Illuminate\Mail\Mailables\Content; use Illuminate\Mail\Mailables\Envelope; use Illuminate\Queue\SerializesModels; class OtpMail extends Mailable { use Queueable, SerializesModels; public $otp; /** * Create a new message instance. */ public function __construct($otp) { $this->otp = $otp; } /** * Build the email message. */ public function build() { return $this->subject('Your OTP Code') ->view('emails.otp') ->with(['otp' => $this->otp]); } /** * Get the message envelope. */ public function envelope(): Envelope { return new Envelope( subject: 'OTP Mail', ); } } Create a Blade view in resources/views/emails/otp.blade.php: <!DOCTYPE html> <html> <head> <title>Your OTP Code</title> </head> <body> <p>Hello,</p> <p>Your One-Time Password (OTP) is: <strong>{{ $otp }}</strong></p> <p>This code is valid for 10 minutes. Do not share it with anyone.</p> <p>Thank you!</p> </body> </html> Step 3: Creating a Vue.js component for OTP input Normally, after login or registration, users are redirected to the dashboard. In this tutorial, we add an extra security step that validates users with an OTP before granting dashboard access. Create two Vue files: Request.vue: requests the OTP Verify.vue: inputs the OTP for verification Now we create the routes for the purpose of return the View and the functionality of creating OTP codes, storing OTP codes, sending OTP codes through the mail class, we head to our web.php file: Route::middleware('auth')->group(function () { Route::get('/request', [OtpController::class, 'create'])->name('request'); Route::post('/store-request', [OtpController::class, 'store'])->name('send.otp.request'); Route::get('/verify', [OtpController::class, 'verify'])->name('verify'); Route::post('/verify-request', [OtpController::class, 'verify_request'])->name('verify.otp.request'); }); Putting all of this code in the OTP controller returns the View for our request.vue and verify.vue file and the functionality of creating OTP codes, storing OTP codes, sending OTP codes through the mail class and verifying OTP codes, we head to our web.php file to set up the routes. public function create(Request $request) { return Inertia::render('Request', [ 'email' => $request->query('email', ''), ]); } public function store(Request $request) { $request->validate([ 'email' => 'required|email|exists:users,email', ]); $otp = rand(100000, 999999); Cache::put('otp_' . $request->email, $otp, now()->addMinutes(10)); Log::info("OTP generated for " . $request->email . ": " . $otp); Mail::to($request->email)->send(new OtpMail($otp)); return redirect()->route('verify', ['email' => $request->email]); } public function verify(Request $request) { return Inertia::render('Verify', [ 'email' => $request->query('email'), ]); } public function verify_request(Request $request) { $request->validate([ 'email' => 'required|email|exists:users,email', 'otp' => 'required|digits:6', ]); $cachedOtp = Cache::get('otp_' . $request->email); Log::info("OTP entered: " . $request->otp); Log::info("OTP stored in cache: " . ($cachedOtp ?? 'No OTP found')); if (!$cachedOtp) { return back()->withErrors(['otp' => 'OTP has expired. Please request a new one.']); } if ((string) $cachedOtp !== (string) $request->otp) { return back()->withErrors(['otp' => 'Invalid OTP. 
Please try again.']); } Cache::forget('otp_' . $request->email); $user = User::where('email', $request->email)->first(); if ($user) { $user->email_verified_at = now(); $user->save(); } return redirect()->route('dashboard')->with('success', 'OTP Verified Successfully!'); } Having set all this code, we return to the request.vue file to set it up. <script setup> import AuthenticatedLayout from '@/Layouts/AuthenticatedLayout.vue'; import InputError from '@/Components/InputError.vue'; import InputLabel from '@/Components/InputLabel.vue'; import PrimaryButton from '@/Components/PrimaryButton.vue'; import TextInput from '@/Components/TextInput.vue'; import { Head, useForm } from '@inertiajs/vue3'; const props = defineProps({ email: { type: String, required: true, }, }); const form = useForm({ email: props.email, }); const submit = () => { form.post(route('send.otp.request'), { onSuccess: () => { alert("OTP has been sent to your email!"); form.get(route('verify'), { email: form.email }); // Redirecting to OTP verification }, }); }; </script> <template> <Head title="Request OTP" /> <AuthenticatedLayout> <form @submit.prevent="submit"> <div> <InputLabel for="email" value="Email" /> <TextInput id="email" type="email" class="mt-1 block w-full" v-model="form.email" required autofocus /> <InputError class="mt-2" :message="form.errors.email" /> </div> <div class="mt-4 flex items-center justify-end"> <PrimaryButton :class="{ 'opacity-25': form.processing }" :disabled="form.processing"> Request OTP </PrimaryButton> </div> </form> </AuthenticatedLayout> </template> Having set all this code, we return to the verify.vue to set it up: <script setup> import AuthenticatedLayout from '@/Layouts/AuthenticatedLayout.vue'; import InputError from '@/Components/InputError.vue'; import InputLabel from '@/Components/InputLabel.vue'; import PrimaryButton from '@/Components/PrimaryButton.vue'; import TextInput from '@/Components/TextInput.vue'; import { Head, useForm, usePage } from '@inertiajs/vue3'; const page = usePage(); // Get the email from the URL query params const email = page.props.email || ''; // Initialize form with email and OTP field const form = useForm({ email: email, otp: '', }); // Submit function const submit = () => { form.post(route('verify.otp.request'), { onSuccess: () => { alert("OTP verified successfully! Redirecting..."); window.location.href = '/dashboard'; // Change to your desired redirect page }, onError: () => { alert("Invalid OTP. Please try again."); }, }); }; </script> <template> <Head title="Verify OTP" /> <AuthenticatedLayout> <form @submit.prevent="submit"> <div> <InputLabel for="otp" value="Enter OTP" /> <TextInput id="otp" type="text" class="mt-1 block w-full" v-model="form.otp" required /> <InputError class="mt-2" :message="form.errors.otp" /> </div> <div class="mt-4 flex items-center justify-end"> <PrimaryButton :disabled="form.processing"> Verify OTP </PrimaryButton> </div> </form> </AuthenticatedLayout> </template> Step 4: Integrating OTP authentication into login and register workflows Update the login controller: public function store(LoginRequest $request): RedirectResponse { $request->authenticate(); $request->session()->regenerate(); return redirect()->intended(route('request', absolute: false)); } Update the registration controller: public function store(Request $request): RedirectResponse { $request->validate([ 'name' => 'required|string|max:255', 'email' => 'required|string|lowercase|email|max:255|unique:' . 
User::class, 'password' => ['required', 'confirmed', Rules\Password::defaults()], ]); $user = User::create([ 'name' => $request->name, 'email' => $request->email, 'password' => Hash::make($request->password), ]); event(new Registered($user)); Auth::login($user); return redirect(route('request', absolute: false)); } Conclusion Implementing OTP authentication in Laravel and Vue.js enhances security for user logins and transactions. By generating, sending, and verifying OTPs, we can add an extra layer of protection against unauthorized access. This method is particularly useful for financial applications and sensitive user data.
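One of the "best practices" promised in the introduction is rate limiting the OTP endpoints so an attacker cannot brute-force the six-digit code. A minimal sketch using Laravel's built-in throttle middleware is shown below; the limit of 5 requests per minute is an assumption, and the routes are the ones defined earlier in this article.

```php
// routes/web.php -- hypothetical rate limiting for the OTP endpoints.
// 'throttle:5,1' allows at most 5 requests per minute (an assumed limit).
Route::middleware(['auth', 'throttle:5,1'])->group(function () {
    Route::post('/store-request', [OtpController::class, 'store'])->name('send.otp.request');
    Route::post('/verify-request', [OtpController::class, 'verify_request'])->name('verify.otp.request');
});
```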
  3. by: LHB Community Sun, 20 Apr 2025 12:23:45 +0530 As a developer, efficiency is key. Being a full-stack developer myself, I’ve always thought of replacing boring tasks with automation. What could happen if I just keep writing new code in a Python file, and it gets evaluated every time I save it? Isn’t that a productivity boost? 'Hot Reload' is that valuable feature of the modern development process that automatically reloads or refreshes the code after you make changes to a file. This helps the developers see the effect of their changes instantly and avoid manually restarting or refreshing the browser. Over these years, I’ve used tools like entr to keep docker containers on the sync every time I modify docker-compose.yml file or keep testing with different CSS designs on the fly with browser-sync.  1. entrentr (Event Notify Test Runner) is a lightweight command line tool for monitoring file changes and triggering specified commands. It’s one of my favorite tools to restart any CLI process, whether it be triggering a docker build or restarting a python script or keep rebuilding the C project. For developers who are used to the command line, entr provides a simple and efficient way to perform tasks such as building, testing, or restarting services in real time. Key Features Lightweight, no additional dependencies.Highly customizableIdeal for use in conjunction with scripts or build tools.Linux only.Installation All you have to do is type in the following command in the terminal: sudo apt install -y entrUsage Auto-trigger build tools: Use entr to automatically execute build commands like make, webpack, etc. Here's the command I use to do that: ls docker-compose.yml | entr -r docker buildHere, -r flag reloads the child process, which is the run command ‘docker build’. 0:00 /0:23 1× Automatically run tests: Automatically re-run unit tests or integration tests after modifying the code. ls *.ts | entr bun test2. nodemonnodemon is an essential tool for developers working on Node.js applications. It automatically monitors changes to project files and restarts the Node.js server when files are modified, eliminating the need for developers from restarting the server manually. Key Features Monitor file changes and restart Node.js server automatically.Supports JavaScript and TypeScript projectsCustomize which files and directories to monitor.Supports common web frameworks such as Express, Hapi.Installation You can type in a single command in the terminal to install the tool: npm install -g nodemonIf you are installing Node.js and npm for the first on Ubuntu-based distributions. You can follow our Node.js installation tutorial. Usage When you type in the following command, it starts server.js and will automatically restart the server if the file changes. nodemon server.js3. LiveReload.netLiveReload.net is a very popular tool, especially for front-end developers. It automatically refreshes the browser after you save a file, helping developers see the effect of changes immediately, eliminating the need to manually refresh the browser. Unlike others, it is a web–based tool, and you need to head to its official website to get started. Every file remains in your local network. No files are uploaded to a third-party server. Key Features Seamless integration with editorsSupports custom trigger conditions to refresh the pageGood compatibility with front-end frameworks and static websites.Usage It's stupidly simple. Just load up the website, and drag and drop your folder to start making live changes.  4. 
fswatchfswatch is a cross-platform file change monitoring tool for Linux, macOS, and developers using it on Windows via WSL (Windows Subsystem Linux). It is powerful enough to monitor multiple files and directories for changes and perform actions accordingly. Key Features Supports cross-platform operation and can be used on Linux and macOS.It can be used with custom scripts to trigger multiple operations.Flexible configuration options to filter specific types of file changes.Installation To install it on a Linux distribution, type in the following in the terminal: sudo apt install -y fswatchIf you have a macOS computer, you can use the command: brew install fswatchUsage You can try typing in the command here: fswatch -o . | xargs -n1 -I{} makeAnd, then you can chain this command with an entr command for a rich interactive development experience. ls hellomake | entr -r ./hellomakeThe “fswatch” command will invoke make to compile the c application, and then if our binary “hellomake” is modified, we’ll run it again. Isn’t this a time saver?  5. WatchexecWatchexec is a cross-platform command line tool for automating the execution of specified commands when a file or directory changes. It is a lightweight file monitor that helps developers automate tasks such as running tests, compiling code, or reloading services when a source code file changes.    Key Features Support cross-platform use (macOS, Linux, Windows).Fast, written in Rust.Lightweight, no complex configuration.Installation On Linux, just type in: sudo apt install watchexecAnd, if you want to try it on macOS (via homebrew): brew install watchexecYou can also download corresponding binaries for your system from the project’s Github releases section. Usage All you need to do is just run the command: watchexec -e py "pytest"This will run pytests every time a Python file in the current directory is modified. 6. BrowserSyncBrowserSync is a powerful tool that not only monitors file changes, but also synchronizes pages across multiple devices and browsers. BrowserSync can be ideal for developers who need to perform cross-device testing. Key features Cross-browser synchronization.Automatically refreshes multiple devices and browsers.Built-in local development server.Installation Considering you have Node.js installed first, type in the following command: npm i -g browser-syncOr, you can use: npx browser-syncUsage Here is how the commands for it would look like: browser-sync start --server --files "/*.css, *.js, *.html" npx browser-sync start --server --files "/*.css, *.js, *.html"You can use either of the two commands for your experiments. This command starts a local server and monitors the CSS, JS, and HTML files for changes, and the browser is automatically refreshed as soon as a change occurs. If you’re a developer and aren't using any modern frontend framework, this comes handy. 7. watchdog & watchmedoWatchdog is a file system monitoring library written in Python that allows you to monitor file and directory changes in real time. Whether it's file creation, modification, deletion, or file move, Watchdog can help you catch these events and trigger the appropriate action. 
Key Features Cross-platform supportProvides full flexibility with its Python-based APIIncludes watchmedo script to hook any CLI application easilyInstallation Install Python first, and then install with pip using the command below: pip install watchdogUsage Type in the following and watch it in action: watchmedo shell-command --patterns="*.py" --recursive --command="python factorial.py" .This command watches a directory for file changes and prints out the event details whenever a file is modified, created, or deleted. In the command, --patterns="*.py" watches .py files, --recursive watches subdirectories and --command="python factorial.py" run the python file. ConclusionHot reloading tools have become increasingly important in the development process, and they can help developers save a lot of time and effort and increase productivity. With tools like entr, nodemon, LiveReload, Watchexec, Browser Sync, and others, you can easily automate reloading and live feedback without having to manually restart the server or refresh the browser. Integrating these tools into your development process can drastically reduce repetitive work and waiting time, allowing you to focus on writing high-quality code. Whether you're developing a front-end application or a back-end service or managing a complex project, using these hot-reloading tools will enhance your productivity. Bhuwan Mishra is a Fullstack developer, with Python and Go as his tools of choice. He takes pride in building and securing web applications, APIs, and CI/CD pipelines, as well as tuning servers for optimal performance. He also has a passion for working with Kubernetes.
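As a footnote to the watchdog section above: the article only shows the watchmedo CLI, but it also mentions watchdog's Python API. Here is a small illustrative sketch of that API; the handler, patterns, and watched path are examples, not taken from the original post.

```python
# Minimal watchdog example: react whenever a .py file changes.
import time
from watchdog.observers import Observer
from watchdog.events import PatternMatchingEventHandler

class RerunOnChange(PatternMatchingEventHandler):
    def on_modified(self, event):
        # Hook your rebuild or test command here instead of printing.
        print(f"{event.src_path} changed")

observer = Observer()
observer.schedule(RerunOnChange(patterns=["*.py"]), path=".", recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()
```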
  4. by: Sreenath Sat, 19 Apr 2025 13:00:24 GMT Simply creating well-formatted notes isn’t enough to manage the information you collect in daily life—accessibility is key. If you can't easily retrieve that information and its context, the whole point of "knowledge management" falls apart. From my experience using it daily for several months, I’d say Logseq does a better job of interlinking notes than any other app I’ve tried. So, without further ado, let’s dive in. The concept of page, links, and tagsIf you’ve used Logseq before, you’ve likely noticed one key thing: everything is a block. Your data is structured as intentional, individual blocks. When you type a sentence and hit Enter, instead of just creating a new line, Logseq starts a new bullet point. This design brings both clarity and complexity. In Logseq, pages are made up of bullet-formatted text. Each page acts like a link—and when you search for a page that doesn’t exist, Logseq simply creates it for you. Here’s the core idea: pages and tags function in a very similar way. You can think of a tag as a special kind of page that collects links to all content marked with that tag. For a deeper dive into this concept, I recommend checking out this forum post. Logseq also supports block references, which let you link directly to any specific block—meaning you can reference a single sentence from one note in another. 📋Ultimately, it is the end-user's creativity that creates a perfect content organization. There is no one way of using Logseq for knowledge management. It's up to you how you use it.Creating a new page in LogseqClick on the top-left search icon. This will bring a search overlay. Here, enter the name of the page you want to create. If no such page is present, you will get an option to create a new page. Search for a noteFor example, I created a page called "My Logseq Notes" and you can see this newly created page in 'All pages' tab on Logseq sidebar. New page listed in "All Pages" tabLogseq stores all the created page in the pages directory inside the Logseq folder you have chosen on your system. The Logseq pages directory in File ManagerThere won't be any nested directories to store sub-pages. All those things will be done using links and tags. In fact, there is no point to look into the Logseq directory manually. Use the app interface, where the data will appear organized. ⌨️ Use keyboard shortcut for creating pagesPowerful tools like Logseq are better used with keyboard. You can create pages/links/references using only keyboard, without touching the mouse. The common syntax to create a page or link in Logseq is: #One-word-page-nameYou can press the # symbol and enter a one word name. If there are no pages with the name exists, a new page is created. Else, link to the mentioned page is added. If you need to create a page with multiple words, use: #[[Page with multiple words separated with space]]Place the name of the note within two [[]] symbol. 0:00 /0:32 1× Create pages with single word name or multi-word names. Using TagsIn the example above, I have created two pages, one without spaces in the name, while the other has spaces. Both of them can be considered as tags. Confused? The further interlinking of these pages actually defines if it's a page or a tag. If you are using it as a 'special page' to accumulate similar contents, then it can be considered as a tag. If you are filling paragraphs of text inside it, then it will be a regular page. 
Basically, a tag-page is also a page but it has the links to all the pages marked with the said tag. To add a tag to a particular note, you can type #<tag-name> anywhere in the note. For convenience and better organization, you can add at the end of the note. Adding Simple TagsLinking to a pageCreating a new page and adding a link to an existing page is the same process in Logseq. You have seen it above. If you press the [[]] and type a name, if that name already exists, a link to that page is created. Else, a new page is created. In the short video below, you can see the process of linking a note in another note. 0:00 /0:22 1× Adding link to a page in Logseq in another note. Desktop Linux is mostly neglected by the industry but loved by the community. For the past 12 years, It's FOSS has been helping people use Linux on their personal computers. And we are now facing the existential threat from AI models stealing our content. If you like what we do and would love to support our work, please become It's FOSS Plus member. It costs $24 a year (less than the cost of a burger meal each month) and you get an ad-free reading experience with the satisfaction of helping the desktop Linux community. Join It's FOSS Plus Referencing a blockThe main flexibility of Logseq lies in the linking of individual blocks. In each note, you have a parent node, then child nodes and grand-child nodes. These are distinguished by the indentation it has. So, in the case of block referencing, you should take utmost care in properly adding indent to the note blocks. Now, type ((. A search box will appear above the cursor. Start typing something, and it will highlight the matching block anywhere in Logseq. 0:00 /0:29 1× Referencing a block inside a note. The block we are adding is part of another note. Similarly, you can right-click on a node and select "Copy block ref" to copy the reference code for that block. Copy Block ReferenceNow, if you paste this on other note, the main node content is pasted and the rest of that block (intended contents) will be visible on hover. Hover over reference for preview💡Instead of the "Copy block ref", you can also choose "Copy block embed" and then paste the embed code. This will paste the whole block in the area where you pasted the embed code.Using the block referencing and Markdown linksOnce you have the block reference code, you can use it as a URL to link to a particular word, instead of pasting raw in a line. To do that, use the Markdown link syntax: [This is a link to the block](reference code of the block)For example: [This is a link to the block](((679b6c26-2ce9-48f2-be6a-491935b314a6)))So, when you hover over the text, the referenced content is previewed. Reference as Markdown HyperlinkNow that you have the basic building blocks, you can start organizing your notes into a proper knowledge base. In the next tutorial of this series, I'll discuss how you can use plugins and themes to customize Logseq.
  5. by: LHB Community Sat, 19 Apr 2025 15:59:35 +0530 As a Kubernetes engineer, I deal with kubectl almost every day. Pod status, service lists, CrashLoopBackOff hunting, YAML configuration comparison, log viewing... these are almost daily operations! But to be honest, in the process of switching namespaces, manually copying pod names, and scrolling through logs again and again, I gradually felt burned out. That is, until I came across KubeTUI, a little tool that made me feel like "getting back on my feet".

What is KubeTUI?
KubeTUI, short for Kubernetes Terminal User Interface, is a Kubernetes visual dashboard that runs in the terminal. It's not like the traditional kubectl, which expects you to memorize and type out commands, or the Kubernetes Dashboard, which requires a browser, Ingress, a token to log in, and a bunch of configuration. In a nutshell, it's a tool that lets you happily browse the state of your Kubernetes cluster from your terminal.

Installing KubeTUI
KubeTUI is written in Rust, and you can download its binary releases from GitHub. Once you do that, you need to set up a Kubernetes environment to build and monitor your application. Let me show you how that is done, with an example of building a WordPress application.

Setting up the Kubernetes environment
We'll use K3s to spin up a Kubernetes environment. The steps are mentioned below.

Step 1: Install k3s and run it

curl -sfL https://get.k3s.io | sh -

With this single command, k3s will start itself after installation. Later on, you can use the command below to start the k3s server:

sudo k3s server --write-kubeconfig-mode='644'

Here's a quick explanation of what the command includes:
k3s server: It starts the K3s server component, which is the core of the Kubernetes control plane.
--write-kubeconfig-mode='644': It ensures that the generated kubeconfig file has permissions that allow the owner to read and write it, and the group and others to only read it. If you start the server without this flag, you need to use sudo for all k3s commands.

Step 2: Check available nodes via kubectl
We need to verify that the Kubernetes control plane is actually working before we can make any deployments. You can use the command below to check that:

k3s kubectl get node

Step 3: Deploy WordPress using a Helm chart (sample application)
K3s provides Helm integration, which helps manage Kubernetes applications. Simply apply a YAML manifest to spin up WordPress from the Bitnami Helm chart. Create a file named wordpress.yaml with the chart manifest (the manifest itself is missing from the original post; a minimal sketch appears at the end of this article). You can then apply the configuration file using the command:

k3s kubectl apply -f wordpress.yaml

It will take around 2–3 minutes for the whole setup to complete.

Step 4: Launch KubeTUI
To launch KubeTUI, type the following command in the terminal:

kubetui

Here's what you will see. There are no pods in the default namespace. Let's switch to the wpdev namespace we created earlier by hitting "n".

How to use KubeTUI
To navigate to different tabs, like switching screens from Pod to Config and Network, you can click with your mouse or press the corresponding number as shown. You can also switch tabs with the keyboard. If you need help with KubeTUI at any time, press ? to see all the available options. It integrates a vim-like search mode; to activate it, press /.

Tip for log filtering
I discovered an interesting feature to filter logs from multiple Kubernetes resources. For example, say we want to target logs from all pods with names containing WordPress.
It will combine logs from both of these pods. We can use the query: pod:wordpressYou can target different resource types like svc, jobs, deploy, statefulsets, replicasets with the log filtering in place. Instead of combining logs, if you want to remove some pods or container logs, you can achieve it with !pod:pod-to-exclude and !container:container-to-exclude filters. ConclusionWorking with Kubernetes involves switching between different namespaces, pods, networks, configs, and services. KubeTUI can be a valuable asset in managing and troubleshooting Kubernetes environment.  I find myself more productive using tools like KubeTUI. Share your thoughts on what tools you’re utilizing these days to make your Kubernetes journey smoother. Bhuwan Mishra is a Fullstack developer, with Python and Go as his tools of choice. He takes pride in building and securing web applications, APIs, and CI/CD pipelines, as well as tuning servers for optimal performance. He also has a passion for working with Kubernetes.
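The wordpress.yaml manifest referenced in Step 3 is missing from the post above. Assuming it relied on the Helm controller bundled with k3s and the Bitnami chart mentioned in the text, a minimal sketch might look like the following; the chart values are illustrative, and only the wpdev target namespace comes from the article.

```yaml
# Hypothetical stand-in for the missing wordpress.yaml (k3s HelmChart CRD)
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: wordpress
  namespace: kube-system          # k3s's Helm controller watches this namespace
spec:
  repo: https://charts.bitnami.com/bitnami
  chart: wordpress
  targetNamespace: wpdev          # create it first: k3s kubectl create namespace wpdev
  valuesContent: |-
    service:
      type: ClusterIP
```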
  6. by: Geoff Graham Fri, 18 Apr 2025 12:12:35 +0000 Hey, did you see the post Jen Simmons published about WebKit’s text-wrap: pretty implementation? It was added to Safari Technology Preview and can be tested now, as in, like, today. Slap this in a stylesheet and your paragraphs get a nice little makeover that improves the ragging, reduces hyphenation, eliminates typographic orphans at the end of the last line, and generally avoids large typographic rivers as a result. The first visual in the post tells the full story, showing how each of these is handled. Credit: WebKit Blog That’s a lot of heavy lifting for a single value! And according to Jen, this is vastly different from Chromium’s implementation of the exact same feature. Jen suggests that performance concerns are the reason for the difference. It does sound like the pretty value does a lot of work, and you might imagine that would have a cumulative effect when we’re talking about long-form content where we’re handling hundreds, if not thousands, of lines of text. If it sounds like Safari cares less about performance, that’s not the case, as their approach is capable of handling the load. Great, carry on! But now you know that two major browsers have competing implementations of the same feature. I’ve been unclear on the terminology of pretty since it was specced, and now it truly seems that what is considered “pretty” really is in the eye of the beholder. And if you’re hoping to choose a side, don’t, because the specification is intentionally unopinionated in this situation, as it says (emphasis added): So, there you have it. One new feature. Two different approaches. Enjoy your new typographic powers. 💪 “Pretty” is in the eye of the beholder originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
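For reference, the declaration the post is describing is a single line; applied to paragraphs it would look something like this (the p selector is just an example):

```css
/* Opt paragraphs into the improved line-breaking behaviour discussed above */
p {
  text-wrap: pretty;
}
```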
  7. by: Zell Liew Thu, 17 Apr 2025 12:38:05 +0000 There was once upon a time when native CSS lacked many essential features, leaving developers to come up with all sorts of ways to make CSS easier to write over the years. These ways can mostly be categorized into two groups: Pre-processors Post-processors Pre-processors include tools like Sass, Less, and Stylus. Like what the category’s name suggests, these tools let you write CSS in their syntax before compiling your code into valid CSS. Post-processors work the other way — you write non-valid CSS syntax into a CSS file, then post-processors will change those values into valid CSS. There are two major post-processors today: PostCSS LightningCSS PostCSS is the largest kid on the block while Lightning CSS is a new and noteworthy one. We’ll talk about them both in a bit. I think post-processors have won the compiling game Post-processors have always been on the verge of winning since PostCSS has always been a necessary tool in the toolchain. The most obvious (and most useful) PostCSS plugin for a long time is Autoprefixer — it creates vendor prefixes for you so you don’t have to deal with them. /* Input */ .selector { transform: /* ... */; } .selector { -webkit-transform: /* ... */; transform: /* ... */; } Arguably, we don’t need Autoprefixer much today because browsers are more interopable, but nobody wants to go without Autoprefixer because it eliminates our worries about vendor prefixing. What has really tilted the balance towards post-processors includes: Native CSS gaining essential features Tailwind removing support for pre-processors Lightning CSS Let me expand on each of these. Native CSS gaining essential features CSS pre-processors existed in the first place because native CSS lacked features that were critical for most developers, including: CSS variables Nesting capabilities Allowing users to break CSS into multiple files without additional fetch requests Conditionals like if and for Mixins and functions Native CSS has progressed a lot over the years. It has gained great browser support for the first two features: CSS Variables Nesting With just these two features, I suspect a majority of CSS users won’t even need to fire up pre-processors or post-processors. What’s more, The if() function is coming to CSS in the future too. But, for the rest of us who needs to make maintenance and loading performance a priority, we still need the third feature — the ability to break CSS into multiple files. This can be done with Sass’s use feature or PostCSS’s import feature (provided by the postcss-import plugin). PostCSS also contains plugins that can help you create conditionals, mixins, and functions should you need them. Although, from my experience, mixins can be better replaced with Tailwind’s @apply feature. This brings us to Tailwind. Tailwind removing support for pre-processors Tailwind 4 has officially removed support for pre-processors. From Tailwind’s documentation: If you included Tailwind 4 via its most direct installation method, you won’t be able to use pre-processors with Tailwind. @import `tailwindcss` That’s because this one import statement makes Tailwind incompatible with Sass, Less, and Stylus. But, (fortunately), Sass lets you import CSS files if the imported file contains the .css extension. So, if you wish to use Tailwind with Sass, you can. But it’s just going to be a little bit wordier. 
@layer theme, base, components, utilities; @import "tailwindcss/theme.css" layer(theme); @import "tailwindcss/preflight.css" layer(base); @import "tailwindcss/utilities.css" layer(utilities); Personally, I dislike Tailwind’s preflight styles so I exclude them from my files. @layer theme, base, components, utilities; @import 'tailwindcss/theme.css' layer(theme); @import 'tailwindcss/utilities.css' layer(utilities); Either way, many people won’t know you can continue to use pre-processors with Tailwind. Because of this, I suspect pre-processors will get less popular as Tailwind gains more momentum. Now, beneath Tailwind is a CSS post-processor called Lightning CSS, so this brings us to talking about that. Lightning CSS Lightning CSS is a post-processor can do many things that a modern developer needs — so it replaces most of the PostCSS tool chain including: postcss-import postcss-preset-env autoprefixer Besides having a decent set of built-in features, it wins over PostCSS because it’s incredibly fast. Speed helps Lightning CSS win since many developers are speed junkies who don’t mind switching tools to achieve reduced compile times. But, Lightning CSS also wins because it has great distribution. It can be used directly as a Vite plugin (that many frameworks support). Ryan Trimble has a step-by-step article on setting it up with Vite if you need help. // vite.config.mjs export default { css: { transformer: 'lightningcss' }, build: { cssMinify: 'lightningcss' } }; If you need other PostCSS plugins, you can also include that as part of the PostCSS tool chain. // postcss.config.js // Import other plugins... import lightning from 'postcss-lightningcss' export default { plugins: [lightning, /* Other plugins */], } Many well-known developers have switched to Lightning CSS and didn’t look back. Chris Coyier says he’ll use a “super basic CSS processing setup” so you can be assured that you are probably not stepping in any toes if you wish to switch to Lightning, too. If you wanna ditch pre-processors today You’ll need to check the features you need. Native CSS is enough for you if you need: CSS Variables Nesting capabilities Lightning CSS is enough for you if you need: CSS Variables Nesting capabilities import statements to break CSS into multiple files Tailwind (with @apply) is enough for you if you need: all of the above Mixins If you still need conditionals like if, for and other functions, it’s still best to stick with Sass for now. (I’ve tried and encountered interoperability issues between postcss-for and Lightning CSS that I shall not go into details here). That’s all I want to share with you today. I hope it helps you if you have been thinking about your CSS toolchain. So, You Want to Give Up CSS Pre- and Post-Processors… originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
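Since the argument above leans on native CSS variables and nesting as the two features that make pre-processors optional for many people, here is a tiny illustrative snippet combining both; the class names are made up.

```css
/* Native custom properties plus native nesting, no pre-processor required */
.card {
  --card-gap: 1rem;
  display: grid;
  gap: var(--card-gap);

  & .card-title {
    margin-block: 0;
  }
}
```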
  8. by: Abhishek Prakash Thu, 17 Apr 2025 06:27:20 GMT It's the release week. Fedora 42 is already out. Ubuntu 25.04 will be releasing later today along with its flavors like Kubuntu, Xubuntu, Lubuntu etc. In the midst of these two heavyweights, MX Linux and Manjaro also quickly released newer versions. For Manjaro, it is more of an ISO refresh, as it is a rolling release distribution. Overall, a happening week for Linux lovers 🕺 💬 Let's see what else you get in this edition Arco Linux bids farewell.Systemd working on its own Linux distro.Looking at the origin of UNIX.And other Linux news, tips, and, of course, memes!This edition of FOSS Weekly is supported by Aiven.❇️ Aiven for ClickHouse® - The Fastest Open Source Analytics Database, Fully ManagedClickHouse processes analytical queries 100-1000x faster than traditional row-oriented systems. Aiven for ClickHouse® gives you the lightning-fast performance of ClickHouse–without the infrastructure overhead. Just a few clicks is all it takes to get your fully managed ClickHouse clusters up and running in minutes. With seamless vertical and horizontal scaling, automated backups, easy integrations, and zero-downtime updates, you can prioritize insights–and let Aiven handle the infrastructure. Managed ClickHouse database | AivenAiven for ClickHouse® – fully managed, maintenance-free data warehouse ✓ All-in-one open source cloud data platform ✓ Try it for freeAiven📰 Linux and Open Source NewsThe Arch-based ArcoLinux has been discontinued.Fedora 42 has been released with some rather interesting changes.Manjaro 25.0 'Zetar' is here, offering a fresh image for new installations. ParticleOS is Systemd's attempt at a Linux distribution. ParticleOS: Systemd’s Very Own Linux Distro in MakingA Linux distro from systemd? Sounds interesting, right?It's FOSS NewsSourav Rudra🧠 What We’re Thinking AboutLinus Torvalds was told that Git is more popular than Linux. Git is More Popular than Linux: TorvaldsLinus Torvalds reflects on 20 years of Git.It's FOSS NewsSourav Rudra🧮 Linux Tips, Tutorials and More11 vibe coding tools to 10x your dev workflow.Adding comments in bash scripts.Understand the difference between Pipewire and Pulseaudio.Make your Logseq notes more readable by formatting them. That's a new series focusing on Logseq.From UNIX to today’s tech. Learn how it shaped the digital world. Desktop Linux is mostly neglected by the industry but loved by the community. For the past 12 years, It's FOSS has been helping people use Linux on their personal computers. And we are now facing the existential threat from AI models stealing our content. If you like what we do and would love to support our work, please become It's FOSS Plus member. It costs $24 a year (less than the cost of a burger meal each month) and you get an ad-free reading experience with the satisfaction of helping the desktop Linux community. Join It's FOSS Plus 👷 Homelab and Maker's CornerThese 28 cool Raspberry Pi Zero W projects will keep you busy. 28 Super Cool Raspberry Pi Zero W Project IdeasWondering what to do with your Raspberry Pi Zero W? Here are a bunch of project ideas you can spend some time on and satisfy your DIY craving.It's FOSSChinmay✨ Apps HighlightYou can download YouTube videos using Seal on Android. 
Seal: A Nifty Open Source Android App to Download YouTube Video and AudioDownload YouTube video/music (for educational purpose or with consent) with this little, handy Android app.It's FOSS NewsSourav Rudra📽️ Videos I am Creating for YouSee the new features of Ubuntu 25.04 in action in this video. Subscribe to It's FOSS YouTube Channel🧩 Quiz TimeOur Guess the Desktop Environment Crossword will test your knowledge. Guess the Desktop Environment: CrosswordTest your desktop Linux knowledge with this simple crossword puzzle. Can you solve it all correctly?It's FOSSAbhishek PrakashAlternatively, guess all of these open source privacy tools correctly? Know The Best Open-Source Privacy ToolsDo you utilize open-source tools for privacy?It's FOSSAnkush Das💡 Quick Handy TipYou can make Thunar open a new tab instead of a new window. This is good in situations when opening a folder from other apps, like a web browser. This reduces screen clutter. First, click on Edit ⇾ Preferences. Here, go to the Behavior tab. Now, under "Tabs and Windows", enable the first checkbox as shown above or all three if you need the functionality of the other two. 🤣 Meme of the WeekWe are generally a peaceful bunch, for the most part. 🫣 🗓️ Tech TriviaOn April 16, 1959, John McCarthy publicly introduced LISP, a programming language for AI that emphasized symbolic computation. This language remains influential in AI research today. 🧑‍🤝‍🧑 FOSSverse CornerFOSSers are discussing VoIP, do you have any insights to add here? A discussion over Voice Over Internet Protocol (VoIP)I live in a holiday village where we have several different committees and meetings, for those not present to attend the meetings we do video conférences using voip. A few years back the prefered system was skype, we changed to whatsapp last year as we tend to use its messaging facilities and its free. We have a company who manages our accounts, they prefer using teams, paid for version as they can invoice us for its use … typical accountant. My question, does it make any difference in band w…It's FOSS Communitycallpaul.eu (Paul)❤️ With loveShare it with your Linux-using friends and encourage them to subscribe (hint: it's here). Share the articles in Linux Subreddits and community forums. Follow us on Google News and stay updated in your News feed. Opt for It's FOSS Plus membership and support us 🙏 Enjoy FOSS 😄
  9. by: Preethi Wed, 16 Apr 2025 12:34:50 +0000 This article covers tips and tricks on effectively utilizing the CSS backdrop-filter property to style contemporary user interfaces. You’ll learn how to layer backdrop filters among multiple elements, and integrate them with other CSS graphical effects to create elaborate designs. Below is a hodgepodge sample of what you can build based on everything we’ll cover in this article. More examples are coming up. CodePen Embed Fallback The blurry, frosted glass effect is popular with developers and designers these days — maybe because Josh Comeau wrote a deep-dive about it somewhat recently — so that is what I will base my examples on. However, you can apply everything you learn here to any relevant filter. I’ll also be touching upon a few of them in my examples. What’s essential in a backdrop filter? If you’re familiar with CSS filter functions like blur() and brightness(), then you’re also familiar with backdrop filter functions. They’re the same. You can find a complete list of supported filter functions here at CSS-Tricks as well as over at MDN. The difference between the CSS filter and backdrop-filter properties is the affected part of an element. Backdrop filter affects the backdrop of an element, and it requires a transparent or translucent background in the element for its effect to be visible. It’s important to remember these fundamentals when using a backdrop filter, for these reasons: to decide on the aesthetics, to be able to layer the filters among multiple elements, and to combine filters with other CSS effects. The backdrop Design is subjective, but a little guidance can be helpful. If you’ve applied a blur filter to a plain background and felt the result was unsatisfactory, it could be that it needed a few embellishments, like shadows, or more often than not, it’s because the backdrop is too plain. Plain backdrops can be enhanced with filters like brightness(), contrast(), and invert(). Such filters play with the luminosity and hue of an element’s backdrop, creating interesting designs. Textured backdrops complement distorting filters like blur() and opacity(). <main> <div> <section> <h1>Weather today</h1> Cloudy with a chance of meatballs. Ramenstorms at 3PM that will last for ten minutes. </section> </div> </main> main { background: center/cover url("image.jpg"); box-shadow: 0 0 10px rgba(154 201 255 / 0.6); /* etc. */ div { backdrop-filter: blur(10px); color: white; /* etc. */ } } CodePen Embed Fallback Layering elements with backdrop filters As we just discussed, backdrop filters require an element with a transparent or translucent background so that everything behind it, with the filters applied, is visible. If you’re applying backdrop filters on multiple elements that are layered above one another, set a translucent (not transparent) background to all elements except the bottommost one, which can be transparent or translucent, provided it has a backdrop. Otherwise, you won’t see the desired filter buildup. <main> <div> <section> <h1>Weather today</h1> Cloudy with a chance of meatballs. Ramenstorms at 3PM that will last for ten minutes. </section> <p>view details</p> </div> </main> main { background: center/cover url("image.jpg"); box-shadow: 0 0 10px rgba(154 201 255 / 0.6); /* etc. */ div { background: rgb(255 255 255 / .1); backdrop-filter: blur(10px); /* etc. */ p { backdrop-filter: brightness(0) contrast(10); /* etc. 
*/ } } } CodePen Embed Fallback Combining backdrop filters with other CSS effects When an element meets a certain criterion, it gets a backdrop root (not yet a standardized name). One criterion is when an element has a filter effect (from filter and backdrop-filter). I believe backdrop filters can work well with other CSS effects that also use a backdrop root because they all affect the same backdrop. Of those effects, I find two interesting: mask and mix-blend-mode. Combining backdrop-filter with mask resulted in the most reliable outcome across the major browsers in my testing. When it’s done with mix-blend-mode, the blur backdrop filter gets lost, so I won’t use it in my examples. However, I do recommend exploring mix-blend-mode with backdrop-filter. Backdrop filter with mask Unlike backdrop-filter, CSS mask affects the background and foreground (made of descendants) of an element. We can use that to our advantage and work around it when it’s not desired. <main> <div> <div class="bg"></div> <section> <h1>Weather today</h1> Cloudy with a chance of meatballs. Ramenstorms at 3PM that will last for ten minutes. </section> </div> </main> main { background: center/cover url("image.jpg"); box-shadow: 0 0 10px rgba(154 201 255 / 0.6); /* etc. */ > div { .bg { backdrop-filter: blur(10px); mask-image: repeating-linear-gradient(90deg, transparent, transparent 2px, white 2px, white 10px); /* etc. */ } /* etc. */ } } CodePen Embed Fallback Backdrop filter for the foreground We have the filter property to apply graphical effects to an element, including its foreground, so we don’t need backdrop filters for such instances. However, if you want to apply a filter to a foreground element and introduce elements within it that shouldn’t be affected by the filter, use a backdrop filter instead. <main> <div class="photo"> <div class="filter"></div> </div> <!-- etc. --> </main> .photo { background: center/cover url("photo.jpg"); .filter { backdrop-filter: blur(10px) brightness(110%); mask-image: radial-gradient(white 5px, transparent 6px); mask-size: 10px 10px; transition: backdrop-filter .3s linear; /* etc.*/ } &:hover .filter { backdrop-filter: none; mask-image: none; } } In the example below, hover over the blurred photo. CodePen Embed Fallback There are plenty of ways to play with the effects of the CSS backdrop-filter. If you want to layer the filters across stacked elements then ensure the elements on top are translucent. You can also combine the filters with other CSS standards that affect an element’s backdrop. Once again, here’s the set of UI designs I showed at the beginning of the article, that might give you some ideas on crafting your own. CodePen Embed Fallback References backdrop-filter (CSS-Tricks) backdrop-filter (MDN) Backdrop root (CSS Filter Effects Module Level 2) Filter functions (MDN) Using CSS backdrop-filter for UI Effects originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  10. by: Abhishek Kumar Tue, 15 Apr 2025 05:41:55 GMT Once upon a time, coding meant sitting down, writing structured logic, and debugging for hours. Fast-forward to today, and we have Vibe Coding, a trend where people let AI generate entire chunks of code based on simple prompts. No syntax, no debugging, no real understanding of what’s happening under the hood. Just vibes. Coined by OpenAI co-founder Andrej Karpathy, Vibe Coding is the act of developing software by giving natural language instructions to AI and accepting whatever it spits out. Source: X. Some people even take it a step further by using voice-to-text tools so they don’t have to type at all. Just describe your dream app, and boom, the AI makes it for you. Or does it? People are building full-fledged SaaS products in days, launching MVPs overnight, and somehow making more money than seasoned engineers who swear by Agile methodologies. And here I am, writing about them instead of cashing in myself. Life isn’t fair, huh? But don’t get me wrong, I’m not here to hate. I’m here to expand on this interesting movement and hand you the ultimate arsenal to embrace vibe coding with these tools. ✋Non-FOSS Warning! Some of the applications mentioned here may not be open source. They have been included in the context of Linux usage. Also, some tools provide interfaces for popular, commercial LLMs like ChatGPT and Claude. 1. Aider - AI pair programming in your terminal Aider is the perfect choice if you're looking for a pair programmer to help you ship code faster. It allows you to pair program with LLMs to edit code in your local GitHub repository. You can start a new project or work with an existing GitHub repo—all from your terminal. Key Features ✅ Aider works best with Claude 3.7 Sonnet, DeepSeek R1 & Chat V3, OpenAI o1, o3-mini & GPT-4o, but can connect to almost any LLM, including local models. ✅ Aider makes a map of your entire codebase, which helps it work well in larger projects. ✅ Supports most popular programming languages: Python, JavaScript, Rust, Ruby, Go, C++, PHP, HTML, CSS, and more. ✅ Automatically commits changes with sensible commit messages. Use familiar Git tools to easily diff, manage, and undo AI changes. ✅ Use Aider from within your favorite IDE or editor. Ask for changes by adding comments to your code, and Aider will get to work. ✅ Add images and web pages to the chat to provide visual context, screenshots, and reference docs. ✅ Automatically lint and test your code every time Aider makes changes. It can fix problems detected by linters and test suites. ✅ Works best with LLM APIs but also supports web chat interfaces, making copy-pasting code seamless. Aider 2. VannaAI - Chat with SQL Database Writing SQL queries can be tedious, but VannaAI changes that by letting you interact with SQL databases using natural language. Instead of manually crafting queries, you describe what you need, and VannaAI generates the SQL for you. It works in two steps: train a RAG "model" on your data, then ask questions that return SQL queries. Key Features ✅ Out-of-the-box support for Snowflake, BigQuery, Postgres, and more. ✅ The Vanna Python package and frontend integrations are all open-source, allowing deployment on your infrastructure. ✅ Database contents are never sent to the LLM unless explicitly enabled. ✅ Improves continuously by augmenting training data. ✅ Use Vanna in Jupyter Notebooks, Slackbots, web apps, Streamlit apps, or even integrate it into your own web app. 
VannaAI makes querying databases as easy as having a conversation, making it a game-changer for both technical and non-technical users. Vanna AI3. All Hands - Open source agents for developersAll Hands is an open-source platform for AI developer agents, capable of building projects, adding features, debugging, and more. Competing with Devin, All Hands recently topped the SWE-bench leaderboard with 53% accuracy. Key Features ✅ Use All Hands via an interactive GUI, command-line interface (CLI), or non-interactive modes like headless execution and GitHub Actions. ✅ Open-source freedom, built under the MIT license to ensure AI technology remains accessible to all. ✅ Handles complex tasks, from code generation to debugging and issue fixing. ✅ Developed in collaboration with AI safety experts like Invariant Labs to balance innovation and security. To get started, install Docker 26.0.0+ and run OpenHands using the provided Docker commands. Once running, configure your LLM provider and start coding with AI-powered assistance. All Hands4. Continue - Leading AI-powered code assistantYou must have heard about Cursor IDE, the popular AI-powered IDE; Continue is similar to it but open source under Apache license. It is highly customizable and lets you add any language model for auto-completion or chat. This can immensely improve your productivity. You can add Continue to VScode and JetBrains. Key Features ✅ Continue autocompletes single lines or entire sections of code in any programming language as you type. ✅ Attach code or other context to ask questions about functions, files, the entire codebase, and more. ✅ Select code sections and press a keyboard shortcut to rewrite code from natural language. ✅ Works with Ollama, OpenAI, Together, Anthropic, Mistral, Azure OpenAI Service, and LM Studio. ✅ Codebase, GitLab Issues, Documentation, Methods, Confluence pages, Files. ✅ Data blocks, Docs blocks, Rules blocks, MCP blocks, Prompts blocks. Continue5. Wave - Terminal with local LLMsWave terminal introduces BYOLLM (Bring Your Own Large Language Model), allowing users to integrate their own local or cloud-based LLMs into their workflow. It currently supports local LLM providers such as Ollama, LM Studio, llama.cpp, and LocalAI while also enabling the use of any OpenAI API-compatible model. Key Features ✅ Use local or cloud-based LLMs, including OpenAI-compatible APIs. ✅ Seamlessly integrate LLM-powered responses into your terminal workflow. ✅ Set the AI Base URL and AI Model in the settings or via CLI. ✅ Plans to include support for commercial models like Gemini and Claude. Waveterm6. Warp terminal - Agent mode (not open source)After WaveTerm, we have another amazing contender in the AI-powered terminal space, Warp Terminal. I personally use this so I may sound biased. 😛 It’s essentially an AI-powered assistant that can understand natural language, execute commands, and troubleshoot issues interactively. Instead of manually looking up commands or switching between documentation, you can simply describe the task in English and let Agent Mode guide you through it. Key Features ✅ No need to remember complex CLI commands, just type what you want, like "Set up an Nginx reverse proxy with SSL", and Agent Mode will handle the details. ✅ Ran into a “port 3000 already in use” error? Just type "fix it", and Warp will suggest running kill $(lsof -t -i:3000). If that doesn’t work, it’ll refine the approach automatically. ✅ Works seamlessly with Git, AWS, Kubernetes, Docker, and any other tool with a CLI. 
If it doesn’t know a command, you can tell it to read the help docs, and it will instantly learn how to use the tool. ✅ Warp doesn’t send anything to the cloud without your permission. You approve each command before it runs, and it only reads outputs when explicitly allowed. It seems like Warp is moving from a traditional AI-assisted terminal to an interactive AI-powered shell, making the command line much more intuitive. Would you consider switching to it, or do you think this level of automation might be risky for some tasks? Warp Terminal 7. Pieces: AI extension to IDE (not open source) Pieces isn’t a code editor itself; it’s an AI-powered extension that supercharges editors like VS Code, Sublime Text, Neovim, and many more IDEs with real-time intelligence and memory. Its highlighted feature is the Long-Term Memory Agent, which captures up to 9 months of coding context, helping you seamlessly resume work, even after a long break. Everything runs locally for full privacy. It understands your code, recalls snippets, and blends effortlessly into your dev tools to eliminate context switching. Bonus: it’s free for now, with a free tier promised forever, but they will start charging soon, so early access might come with perks. Key Features ✅ Stores 9 months of local coding context ✅ Integrates with Neovim, VS Code, and Sublime Text ✅ Fully on-device AI with zero data sharing ✅ Context-aware suggestions via Pieces Copilot ✅ Organize and share snippets using Pieces Drive ✅ Always-free tier promised, with early adopter perks Pieces 8. Aidermacs: AI-aided coding in Emacs Aidermacs by MatthewZMD is for the Emacs power users who want that sweet Cursor-style AI experience, but without leaving their beloved terminal. It’s a front-end for the open-source Aider, bringing powerful pair programming into Emacs with full respect for its workflows and philosophy. Whether you're using GPT-4, Claude, or even DeepSeek, Aidermacs auto-detects your available models and lets you chat with them directly inside Emacs. And yes, it's deeply customizable, as all good Emacs things should be. Key Features ✅ Integrates Aider into Emacs for collaborative coding ✅ Intelligent model selection from OpenAI, Anthropic, Gemini, and more ✅ Built-in Ediff for side-by-side AI-generated changes ✅ Fine-grained file control: edit, read-only, scratchpad, and external ✅ Fully theme-aware with Emacs-native UI integration ✅ Works well in terminal via vterm with theme-based colors Aidermacs 9. Jeddict AI Assistant This one is for the Java folks. It’s a plugin for Apache NetBeans. I remember using NetBeans back in school, and if this AI stuff had been around then, I swear I would've aced my CS practicals. This isn’t your average autocomplete tool. Jeddict AI Assistant brings full-on AI integration into your IDE: smarter code suggestions, context-aware documentation, SQL query help, even commit messages. It's especially helpful if you're dealing with big Java projects and want AI that actually understands what’s going on in your code. 
Key Features ✅ Smart, inline code completions using OpenAI, DeepSeek, Mistral, and more ✅ AI chat with full awareness of project/class/package context ✅ Javadoc creation & improvement with a single shortcut ✅ Variable renaming, method refactoring, and grammar fixes via AI hints ✅ SQL query assistance & inline completions in the database panel ✅ Auto-generated Git commit messages based on your diffs ✅ Custom rules, file context preview, and experimental in-editor updates ✅ Fully customizable AI provider settings (supports LM Studio, Ollama, GPT4All too!) Jeddict AI Assistant 10. Amazon CodeWhisperer If your coding journey revolves around AWS services, then Amazon CodeWhisperer might be your ideal AI-powered assistant. While it works like other AI coding tools, its real strength lies in its deep integration with AWS SDKs, Lambda, S3, and DynamoDB. CodeWhisperer is fine-tuned for cloud-native development, making it a go-to choice for developers building serverless applications, microservices, and infrastructure-as-code projects. Since it supports Visual Studio Code and JetBrains IDEs, AWS developers can seamlessly integrate it into their workflow and get AWS-specific coding recommendations that follow best practices for scalability and security. Plus, individual developers get free access, making it an attractive option for solo builders and startup developers. Key Features ✅ Optimized code suggestions for AWS SDKs and cloud services. ✅ Built-in security scanning to detect vulnerabilities. ✅ Supports Python, Java, JavaScript, and more. ✅ Free for individual developers. Amazon CodeWhisperer 11. Qodo AI (previously Codium) If you’ve ever been frustrated by the limitations of free AI coding tools, Qodo might be the answer. Supporting over 50 programming languages, including Python, Java, C++, and TypeScript, Qodo integrates smoothly with Visual Studio Code, IntelliJ, and JetBrains IDEs. It provides intelligent autocomplete, function suggestions, and even code documentation generation, making it a versatile tool for projects of all sizes. While it may not have some of the advanced features of paid alternatives, its zero-cost access makes it a game-changer for budget-conscious developers. Key Features ✅ Unlimited free code completions with no restrictions. ✅ Supports 50+ programming languages, including Python, Java, and TypeScript. ✅ Works with popular IDEs like Visual Studio Code and JetBrains. ✅ Lightweight and responsive, ensuring a smooth coding experience. Qodo Final thoughts 📋I deliberately skipped IDEs from this list. I have a separate list of editors for vibe coding on Linux. With time, we’re undoubtedly going to see more AI-assisted coding take center stage. As Anthropic CEO Dario Amodei puts it, AI will write 90% of code within six months and could automate software development entirely within a year. Whether that’s an exciting leap forward or a terrifying thought depends on how much you trust your AI pair programmer. If you’re diving into these tools, I highly recommend brushing up on the basics of coding and version control. AI can write commands for you, but if you don’t know what it’s doing, you might go from “I just built the next billion-dollar SaaS!” to “Why did my AI agent just delete my entire codebase?” in a matter of seconds. (Source: X) That said, this curated list of amazing open-source tools should get you started. Whether you're a seasoned developer or just someone who loves typing cool things into a terminal, these tools will level up your game. 
Just remember: the AI can vibe with you, but at the end of the day, you're still the DJ of your own coding playlist (sorry for the cringy line 👉👈).
  11. by: Chris Coyier Mon, 14 Apr 2025 16:36:55 +0000 I joked while talking with Adam Argyle on ShopTalk the other day that there is more CSS in one of the demos we were looking at than I have in my whole CSS brain. We were looking at his Carousel Gallery which is one of the more impressive sets of CSS demos I’ve ever seen. Don’t let your mind get too stuck on that word “carousel”. I think it’s smart to use that word here, but the CSS technologies being developed here have an incredible number of uses. Things that relate to scrolling interactivity, inertness, column layout, and more. Some of it is brand spanking new. In fact just a few weeks ago, I linked up the Carousel Configurator and said: Which was kind of true at the time, but the features aren’t that experimental anymore. All the features went live in Chrome 135 which is in stable release now for the world. Of course, you’ll need to think in terms of progressive enhancement if you’re looking to roll this stuff out to production, but this is real world movement on some huge stuff for CSS. This stuff is in the category where, looking a few years out, it’s a real mistake if carousels and carousel-like behavior isn’t built this way. This is the way of best performance, best semantics, and best accessibility, which ain’t gonna get beat with your MooTools Super Slider ok. Brecht is already bloggin’ about it. That’s a finger on the pulse right there. What else is pretty hot ‘n’ fresh in CSS land? CSS multicol block direction wrapping by Rachel Andrew — The first implementation of columns being able to wrap down instead of across. Useful. Can you un-mix a mixin? by Miriam Suzanne — Mixins are likely to express themselves as @apply in CSS eventually (despite being abandoned on purpose once?). We can already sort of do it with custom properties and style queries, which actually have the desirable characteristic of cascading. What will @apply do to address that? Feature detect CSS @starting-style support by Bramus Van Damme — Someday, @supports at-rule(@starting-style) {} will work, but there (🫥) is no browser support for that yet. There is a way to do it with the space toggle trick fortunately (which is one of the most mind bending things ever in CSS if you ask me). I feel like mentioning that I was confused how to test a CSS function recently, but actually since they return values, it’s not that weird. I needed to do @supports (color: light-dark(white, black)) {} which worked fine. Related to @starting-style, this is a pretty good article. New Values and Functions in CSS by Alvaro Montoro — speaking of new functions, there are a good number of them, like calc-size(), first-valid(), sibling-index(), random-item(), and more. Amazing. A keyframe combo trick by Adam Argyle — Two animations on a single element, one for the page load and one for a scroll animation. They fight. Or do they? Container Queries Unleashed by Josh Comeau — If you haven’t boned up on the now-available-everywhere @container stuff, it rules, and Josh does a great job of explaining why. A Future of Themes with CSS Inline if() Conditions by Christopher Kirk-Nielsen — Looks like if() in CSS behaves like a switch in other languages and what you’re doing is checking if the value of a custom property is equal to a certain value, then returning whatever value you want. Powerful! Chris is building something like light-dark() here except with more than two themes and where the themes affect more than just color.
  12. by: Declan Chidlow Mon, 14 Apr 2025 12:40:46 +0000 The cursor is a staple of the desktop interface but is scarcely touched by websites. This is for good reason. People expect their cursors to stay fairly consistent, and meddling with them can unnecessarily confuse users. Custom cursors also aren’t visible for people using touch interfaces — which excludes the majority of people. Geoff has already covered styling cursors with CSS pretty comprehensively in “Changing the Cursor with CSS for Better User Experience (or Fun)” so this post is going to focus on complex and interesting styling. Custom cursors with JavaScript Custom cursors with CSS are great, but we can take things to the next level with JavaScript. Using JavaScript, we can use an element as our cursor, which lets us style it however we would anything else. This lets us transition between cursor states, place dynamic text within the cursor, apply complex animations, and apply filters. In its most basic form, we just need a div that continuously positions itself to the cursor location. We can do this with the mousemove event listener. While we’re at it, we may as well add a cool little effect when clicking via the mousedown event listener. CodePen Embed Fallback That’s wonderful. Now we’ve got a bit of a custom cursor going that scales on click. You can see that it is positioned based on the mouse coordinates relative to the page with JavaScript. We do still have our default cursor showing though, and it is important for our new cursor to indicate intent, such as changing when hovering over something clickable. We can disable the default cursor display completely by adding the CSS rule cursor: none to *. Be aware that some browsers will show the cursor regardless if the document height isn’t 100% filled. We’ll also need to add pointer-events: none to our cursor element to prevent it from blocking our interactions, and we’ll show a custom effect when hovering certain elements by adding the pressable class. CodePen Embed Fallback Very nice. That’s a lovely little circular cursor we’ve got here. Fallbacks, accessibility, and touchscreens People don’t need a cursor when interacting with touchscreens, so we can disable ours. And if we’re doing really funky things, we might also wish to disable our cursor for users who have the prefers-reduced-motion preference set. We can do this without too much hassle: CodePen Embed Fallback What we’re doing here is checking if the user is accessing the site with a touchscreen or if they prefer reduced motion and then only enabling the custom cursor if they aren’t. Because this is handled with JavaScript, it also means that the custom cursor will only show if the JavaScript is active, otherwise falling back to the default cursor functionality as defined by the browser. const isTouchDevice = "ontouchstart"in window || navigator.maxTouchPoints > 0; const prefersReducedMotion = window.matchMedia("(prefers-reduced-motion: reduce)").matches; if (!isTouchDevice && !prefersReducedMotion && cursor) { // Cursor implementation is here } Currently, the website falls back to the default cursors if JavaScript isn’t enabled, but we could set a fallback cursor more similar to our styled one with a bit of CSS. Progressive enhancement is where it’s at! Here we’re just using a very basic 32px by 32px base64-encoded circle. The 16 values position the cursor hotspot to the center. 
html { cursor: url("data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIzMiIgaGVpZ2h0PSIzMiIgdmlld0JveD0iMCAwIDMyIDMyIj4KICA8Y2lyY2xlIGN4PSIxNiIgY3k9IjE2IiByPSIxNiIgZmlsbD0iYmxhY2siIC8+Cjwvc3ZnPg==") 16 16, auto; } Taking this further Obviously this is just the start. You can go ballistic and completely overhaul the cursor experience. You can make it invert what is behind it with a filter, you can animate it, you could offset it from its actual location, or anything else your heart desires. As a little bit of inspiration, some really cool uses of custom cursors include: Studio Mesmer switches out the default cursor for a custom eye graphic when hovering cards, which is tasteful and fits their brand. Iara Grinspun’s portfolio has a cursor implemented with JavaScript that is circular and slightly delayed from the actual position which makes it feel floaty. Marlène Bruhat’s portfolio has a sleek cursor that is paired with a gradient that appears behind page elements. Aleksandr Yaremenko’s portfolio features a cursor that isn’t super complex but certainly stands out as a statement piece. Terra features a giant glowing orb containing text describing what you’re hovering over. Please do take care when replacing browser or native operating system features in this manner. The web is accessible by default, and we should take care to not undermine this. Use your power as a developer with taste and restraint. Next Level CSS Styling for Cursors originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
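Since the CodePen demos referenced in this article are not embedded in the text, here is a minimal sketch of the JavaScript-driven cursor described above. It is an illustration under assumptions of my own: the #cursor element, the is-pressed class, and the inline styles are not from the original demo, and the visual styling (size, color, the press transform) would normally live in CSS.

// Minimal sketch: a div that follows the pointer and reacts to clicks.
// Assumes an element like <div id="cursor"></div> exists in the page.
const cursor = document.querySelector("#cursor");
const isTouchDevice = "ontouchstart" in window || navigator.maxTouchPoints > 0;
const prefersReducedMotion = window.matchMedia("(prefers-reduced-motion: reduce)").matches;

if (!isTouchDevice && !prefersReducedMotion && cursor) {
  // The article hides the native cursor with * { cursor: none } in CSS;
  // it is set inline here only to keep the sketch self-contained.
  document.documentElement.style.cursor = "none";
  cursor.style.pointerEvents = "none"; // never block clicks on the page
  cursor.style.position = "fixed";
  cursor.style.transform = "translate(-50%, -50%)"; // center the element on the pointer

  // Keep the element glued to the pointer as it moves.
  document.addEventListener("mousemove", (event) => {
    cursor.style.left = `${event.clientX}px`;
    cursor.style.top = `${event.clientY}px`;
  });

  // A small "press" effect on click, removed when the button is released.
  document.addEventListener("mousedown", () => cursor.classList.add("is-pressed"));
  document.addEventListener("mouseup", () => cursor.classList.remove("is-pressed"));
}

With the touch and reduced-motion checks wrapping the whole block, browsers that skip the script (or users on touchscreens) simply keep the default cursor, which matches the progressive-enhancement approach the article recommends.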
  13. by: Abhishek Prakash Mon, 14 Apr 2025 10:58:44 +0530 Lately, whenever I tried accessing a server via SSH, it asked for a passphrase: Enter passphrase for key '/home/abhishek/.ssh/id_rsa': Interestingly, it was asking for my local system's account password, not the remote server's. Entering the account password for the SSH key is a pain. So, I fixed it with this command which basically resets the password: ssh-keygen -p It then asked for the file which has the key. This is the private SSH key, usually located in the .ssh/id_rsa file. I provided the absolute path for that. Now it asked for the 'old passphrase' which is the local user account password. I provided it one more time and then just pressed enter for the new passphrase. ❯ ssh-keygen -p Enter file in which the key is (/home/abhishek/.ssh/id_ed25519): /home/abhishek/.ssh/id_rsa Enter old passphrase: Enter new passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved with the new passphrase. And thus, it didn't ask me to enter the passphrase for the SSH private key anymore. Did not even need a reboot or anything. Wondering why it happened and how it was fixed? Let's go into detail. What caused the 'Enter passphrase for key' issue? Here is my efficient SSH workflow. I have the same set of SSH keys on my personal systems, so I don't have to create new ones and add them to the servers when I install a new distro. Since the public SSH key is added to the servers, I don't have to enter the root password for the servers every time I use SSH. And then I have an SSH config file in place that maps the server's IP address with an easily identifiable name (a sample of such a config is shown at the end of this article). It further smoothens my workflow. Recently, I switched my personal system to CachyOS. I copied my usual SSH keys from an earlier backup and gave them the right permission. But when I tried accessing any server, it asked for a passphrase: Enter passphrase for key '/home/abhishek/.ssh/id_rsa': No, it was not the remote server's user password. It asked for my regular, local system's password as if I were using sudo. I am guessing that some settings somewhere were left untouched and it started requiring a password to unlock the private SSH key. This is an extra layer of security, and I don't like the inconvenience that comes with it. One way to use SSH without entering a password to unlock the key each time is to reset the password on the SSH key. And that's what you saw at the beginning of this article. Fixing it by resetting the password on the SSH key Note down the location of your SSH private key. Usually, it is ~/.ssh/id_rsa unless you have multiple SSH key sets for different servers. Enter the following command to reset the password on an SSH key: ssh-keygen -p It will ask you for the path to the key. Provide the absolute path to your private SSH key. Enter file in which the key is (/home/abhishek/.ssh/id_ed25519): It then asks you to enter the old passphrase, which should be your local account's password. The same one that you use for sudo. Enter old passphrase: Once you have entered that, it will ask you to enter a new passphrase. Keep it empty by pressing the enter key. This way, it won't have any password. Enter new passphrase (empty for no passphrase): Press the enter key again when it asks: Enter same passphrase again: And that's about it. You can instantly verify it. You don't need to reboot the system or even log out from the terminal. Enjoy SSH 😄
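For context, the SSH config file mentioned above usually lives at ~/.ssh/config. A minimal example of the kind of name-to-server mapping the author describes might look like this; the host alias, IP address, user, and key path are made-up placeholders, not taken from the article:

# ~/.ssh/config: map a memorable name to a server
# (the private key itself should stay readable only by you, e.g. chmod 600)
Host myserver
    HostName 203.0.113.10
    User root
    IdentityFile ~/.ssh/id_rsa

With an entry like this in place, running ssh myserver connects with the listed key and user instead of requiring the full user@IP form each time.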
  14. Blogger posted a blog entry in F.O.S.S
    by: John Paul Wohlscheid Sun, 13 Apr 2025 14:34:36 GMT Sometimes it feels like Unix has been around forever, at least to users who have used Linux or BSD in any form for a decade or more now. Its ideals laid the groundwork for Linux, and it underpins macOS. A modern descendant (FreeBSD) is used on thousands of servers while Linux rules the server space along with the supercomputer industry. Even though the original form of it is history, it remains a significant development that helped start Linux and more. But it had a rocky start and had to be developed in secret. Punch Cards and Multics Back in the days when computers took up whole rooms, the main method of using computers was the punch card interface. Computers didn't come with an operating system; they had a programming language built into them. If you wanted to run a program, you had to use a device to enter your program and the data on a series of punch cards. According to an interview with Brian Kernighan, one of the Unix creators, "So if you had a 1,000-line program, you would have 1,000 cards. There were no screens, no interactive output. You gave your cards to the computer operator and waited for your printout that was the result of your program." At the time, all text output from these computers was capitalized. Kernighan wrote an application to handle the formatting of his thesis. "And so thesis was basically three boxes of cards, 6,000 cards in each box, probably weighed 10, 12 pounds, five kilograms. And so you’d take these three boxes, 1,000 cards of which the first half of the first box was the program and then the remaining 5,000 cards was the thesis. And you would take those three boxes and you’d hand them to the operator. And an hour or two or three later back would come a printed version of thesis again." Needless to say, this makes modern thesis writing seem effortless, right? In the mid-1960s, AT&T, Massachusetts Institute of Technology, and General Electric created a project to revolutionize computing and push it beyond the punch card. The project was named Multics or “Multiplexed Information and Computing Service”. According to the paper that laid out the plans for the project, there were nine major goals: Convenient remote terminal use. Continuous operation analogous to power & telephone services. A wide range of system configurations, changeable without system or user program reorganization. A high reliability internal file system. Support for selective information sharing. Hierarchical structures of information for system administration and decentralization of user activities. Support for a wide range of applications. Support for multiple programming environments & human interfaces. The ability to evolve the system with changes in technology and in user aspirations. Multics would be a time-sharing computer, instead of relying on punch cards. This means that users could log into the system via a terminal and use it for an allotted period of time. This would turn the computer from a system administered by a high priest class (Steven Levy mentioned this concept in his book Hackers) to something that could be accessed by anyone with the necessary knowledge. The project was very ambitious. Unfortunately, turning ideas into reality takes time. Bell Labs withdrew from the project in 1969. They had joined the project to get a time-sharing operating system for their employees, but there had been little progress. The lessons learned from Multics eventually helped in the creation of Unix; more on that below. 
To Space Beyond Image Credits: Multicians / A team installing GE 645 mainframe in Paris. The Bell engineers who had worked on Multics (including Ken Thompson and Dennis Ritchie) were left without an operating system, but with tons of ideas. In the last days of their involvement in the Multics project, they had started writing an operating system on a GE-645 mainframe. But then the project ended, and they no longer needed the mainframe. They lobbied their bosses to buy a mini-computer to start their own operating system project but were denied. They continued to work on the project in secret. Often they would get together and discuss what they would want in an operating system and sketch out ideas for the architecture. During this time, Thompson started working on a little side project. He wrote a game for the GE-645 named Space Travel. The game "simulated all the major bodies in the solar system along with a spaceship that could fly around them". Unfortunately, it was expensive to run on the mainframe. Each game cost $75 to play. So, Thompson went looking for a different, cheaper computer to use. He discovered a PDP-7 mini-computer left over from a previous project. He rewrote the game to run on the PDP-7. PDP-7, Image Credits: Wikipedia. In the summer of 1969, Thompson's wife took their newborn son to visit her parents. Thompson took advantage of this time and newly learned programming skills to start writing an operating system for the PDP-7. Since he saw this new project as a cut-down version of Multics, he named it “Un-multiplexed Information and Computing Service,” or Unics. It was eventually changed to Unix. Other Bell Labs employees joined the project. The team quickly ran into limitations with the hardware itself. The PDP-7 was aging and underpowered, so they had to figure out how to get their hands on a newer computer. They knew that their bosses would never buy a new system because "lab's management wasn't about to allow any more research on operating systems." At the time, Bell Labs produced lots of patents. According to Kernighan, "typically one or two a day at that point." It was time-consuming to create applications for those patents because the formatting required by the government was very specific. At the time, there were no commercial word processing programs capable of handling the formatting. The Unix group offered to write a program for the patent department that would run on a shiny new PDP-11. They also promised to have it done before any commercial software would be available to do the same. Of course, they failed to mention that they would need to write an operating system for the software to run on. Their bosses agreed to the proposal and placed an order for a PDP-11 in May 1970. The computer arrived quickly, but it took six months for the drives to arrive. PDP-11/70, Image Credits: Wikipedia. In the meantime, the team continued to write Unix on the PDP-7, making it the platform where the first version of Unix was developed. Once the PDP-11 was up and running, the team ported what they had to the new system. In short order, the new patent application software was unveiled to the patent department. It was a hit. The management was so pleased with the results that they bought the Unix team their own PDP-11. Growing and Legal Problems Image Credits: Amazon. With a more powerful computer at their command, work on Unix continued. In 1971, the team released its first official manual: The UNIX Programmer's Manual. 
The operating system made its official debut to the world via a paper presented at the 1973 symposium of the Association for Computing Machinery. This was followed by a flood of requests for copies. This brought up new issues. AT&T, the company that financed Bell Labs, couldn't sell an operating system. In 1956, AT&T was forced by the US government to agree to a consent decree. This consent decree prohibited AT&T from "selling products not directly related to telephones and telecommunications, in return for its legal monopoly status in running the country's long-distance phone service." The solution was to release "the Unix source code under license to anyone who asked, charging only a nominal fee". The consent decree also prohibited AT&T from providing tech support. So, the code was essentially available as-is. This led to the creation of the first user groups as Unix adopters banded together to provide mutual assistance. C Programming, The Necessary Catalyst The creation of the C programming language by Dennis Ritchie at Bell Labs helped Unix progress through its future versions and indirectly made BSD and Linux possible. And now we have many programming languages and operating systems, including several variants of Linux, BSD, and other Unix-like systems.
  15. By: Janus Atienza Sat, 12 Apr 2025 18:30:58 +0000 Have you ever searched your name or your brand and found content that you didn’t expect to see? Maybe a page that doesn’t represent you well or something you want to keep track of for your records? If you’re using Linux or Unix, you’re in a great position to take control of that situation. With just a few simple tools, you can save, organize, and monitor any kind of web content with ease. This guide walks you through how to do that, step by step, using tools built right into your system. This isn’t just about removing content. It’s also about staying informed, being proactive, and using the strengths of Linux and Unix to help you manage your digital presence in a reliable way. Let’s take a look at how you can start documenting web content using your system. Why Organizing Online Content Is a Smart Move When something important appears online—like an article that mentions you, a review of your product, or even a discussion thread—it helps to keep a copy for reference. Many platforms and services ask for details if you want them to update or review content. Having all the right information at your fingertips can make things smoother. Good records also help with transparency. You’ll know exactly what was published and when, and you’ll have everything you need if you ever want to take action on it. Linux and Unix systems are perfect for this kind of work because they give you flexible tools to collect and manage web content without needing extra software. Everything you need is already available or easily installable. Start by Saving the Page with wget The first step is to make sure you have a full copy of the page you’re interested in. This isn’t just about saving a screenshot—it’s about capturing the full experience of the page, including images, links, and layout. You can do this with a built-in tool called wget. It’s easy to use and very reliable. Here’s a basic command: wget --mirror --convert-links --adjust-extension --page-requisites --no-parent https://example.com/the-page This command downloads the full version of the page and saves it to your computer. You can organize your saved pages by date, using a folder name like saved_pages_2025-04-10 so everything stays neat and searchable. If you don’t have wget already, most systems let you install it quickly with a package manager like apt or yum. Keep a Log of Your Terminal Session If you’re working in the terminal, it’s helpful to keep a record of everything you do while gathering your content. This shows a clear trail of how you accessed the information. The script command helps with this. It starts logging everything that happens in your terminal into a text file. Just type: script session_log_$(date +%F_%H-%M-%S).txt Then go ahead and run your commands, visit links, or collect files. When you’re done, just type exit to stop the log. This gives you a timestamped file that shows everything you did during that session, which can be useful if you want to look back later. Capture Screenshots with a Timestamp Screenshots are one of the easiest ways to show what you saw on a page. In Linux or Unix, there are a couple of simple tools for this. 
If you’re using a graphical environment, scrot is a great tool for quick screenshots: scrot '%Y-%m-%d_%H-%M-%S.png' -e 'mv $f ~/screenshots/' If you have ImageMagick installed, you can use: import -window root ~/screenshots/$(date +%F_%H-%M-%S).png These tools save screenshots with the date and time in the filename, which makes it super easy to sort and find them later. You can also create a folder called screenshots in your home directory to keep things tidy. Use Checksums to Confirm File Integrity When you’re saving evidence or tracking content over time, it’s a good idea to keep track of your files’ integrity. A simple way to do this is by creating a hash value for each file. Linux and Unix systems come with a tool called sha256sum that makes this easy. Here’s how you can use it: sha256sum saved_page.html > hash_log.txt This creates a unique signature for the file. If you ever need to prove that the file hasn’t changed, you can compare the current hash with the original one. It’s a good way to maintain confidence in your saved content. Organize Your Files in Folders The key to staying organized is to keep everything related to one event or day in the same folder. You can create a structure like this: ~/web_monitoring/ 2025-04-10/ saved_page.html screenshot1.png session_log.txt hash_log.txt This kind of structure makes it easy to find and access your saved pages later. You can even back these folders up to cloud storage or an external drive for safekeeping. Set Up a Simple Monitor Script If you want to stay on top of new mentions or changes to a particular site or keyword, you can create a simple watch script using the command line. One popular method is to use curl to grab search results, then filter them with tools like grep. For example: curl -s "https://www.google.com/search?q=your+name" > ~/search_logs/google_$(date +%F).html You can review the saved file manually or use commands to highlight certain keywords. You can also compare today’s results with yesterday’s using the diff command to spot new mentions. Additionally, if needed, you can also look into how to delete a Google search result. To automate this, just create a cron job that runs the script every day: crontab -e Then add a line like this: 0 7 * * * /home/user/scripts/search_watch.sh This runs the script at 7 a.m. daily and stores the results in a folder you choose (a sketch of what search_watch.sh itself might contain appears at the end of this article). Over time, you’ll build a personal archive of search results that you can refer to anytime. Prepare Your Submission Package If you ever need to contact a website or a service provider about a page, it’s helpful to have everything ready in one place. That way, you can share what you’ve collected clearly and professionally. Here’s what you might include: The exact URL of the page A brief explanation of why you’re reaching out A copy of the page you saved One or more screenshots A summary of what you’re requesting Some platforms also have forms or tools you can use. For example, search engines may provide an online form for submitting requests. If you want to contact a site directly, you can use the whois command to find the owner or hosting provider: whois example.com This will give you useful contact information or point you toward the company that hosts the site. Automate Your Process with Cron Once you have everything set up, you can automate the entire workflow using cron jobs. 
These scheduled tasks let your system do the work while you focus on other things. For example, you can schedule daily page saves, keyword searches, or hash checks. This makes your documentation process consistent and thorough, without any extra effort after setup. Linux and Unix give you the tools to turn this into a fully automated system. It’s a great way to stay prepared and organized using technology you already have. Final Thoughts Linux and Unix users have a unique advantage when it comes to documenting web content. With simple tools like wget, script, and scrot, you can create a complete, organized snapshot of any page or event online. These tools aren’t just powerful—they’re also flexible and easy to use once you get the hang of them. The post Best Way to Document Harmful Content for Removal appeared first on Unixmen.
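To round out the monitoring section above, here is a hedged sketch of what a search_watch.sh script might look like. The article does not show its contents, so the search URL, paths, and filenames below are assumptions for illustration only, and a real search engine may require a custom User-Agent or a different query endpoint.

#!/bin/bash
# search_watch.sh: save today's search results and diff them against yesterday's.
# Paths and the search URL are illustrative assumptions, not from the original article.
# Note: date -d works with GNU date (Linux); BSD/macOS date uses different flags.
set -eu

LOG_DIR="$HOME/search_logs"
mkdir -p "$LOG_DIR"

TODAY="$LOG_DIR/google_$(date +%F).html"
YESTERDAY="$LOG_DIR/google_$(date -d yesterday +%F).html"

# Grab today's results.
curl -s "https://www.google.com/search?q=your+name" > "$TODAY"

# If yesterday's file exists, record anything new in a dated diff file.
# diff exits non-zero when the files differ, so || true keeps set -e from aborting.
if [ -f "$YESTERDAY" ]; then
    diff "$YESTERDAY" "$TODAY" > "$LOG_DIR/changes_$(date +%F).txt" || true
fi

Scheduled from cron as shown earlier, a script along these lines keeps a dated log of results plus a daily diff file you can skim for new mentions.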
