Blog Entries posted by Blogger

  1. by: Sreenath
    Wed, 23 Apr 2025 03:05:46 GMT

    Logseq provides all the necessary elements you need for creating your knowledge base.
    But one size doesn't fit all. You may need something extra that is either too complicated to achieve in Logseq or not possible at all.
    What do you do, then? You use external plugins and extensions.
    Thankfully, Logseq has a thriving marketplace where you can explore various plugins and extensions created by individuals who craved more from Logseq.
    Let me show you how you can install themes and plugins.
    🚧 Privacy alert! Do note that plugins can access your graph and local files. You'll see this warning in Logseq as well. A more granular permission control system is not yet available.

    Installing a plugin in Logseq
    Click on the top-bar menu button and select Plugins as shown in the screenshot below.
    Menu → Plugins
    In the Plugins window, click on Marketplace.
    Click on Marketplace tab
    This will open the Logseq Plugins Marketplace. You can click on the title of a plugin to get the details about that plugin, including a sample screenshot.
    Click on Plugin Title
    If you find the plugin useful, use the Install button adjacent to the plugin in the Marketplace section.
    Install a Plugin
    Managing Plugins
    To manage a plugin (enable/disable it, fine-tune it, etc.), go to Menu → Plugins. This will take you to the plugin management interface.
    📋 If you are on the Marketplace, just use the Installed tab to see all the installed plugins.
    Installed plugins section
    Here, you can enable/disable plugins in Logseq using the corresponding toggle button. Similarly, hover over the settings gear icon of a plugin and select the Open Settings option to access its configuration.
    Click on Plugin settings gear icon
    Installing themes in Logseq
    Logseq looks good by default to me but you can surely experiment with its looks by installing new themes.
    Similar to what you saw in the plugin installation section, click on the Plugins option from the Logseq menu button.
    Click on Menu → Plugins
    Why did I not click the Themes option above? Well, because that is for switching themes, not installing them.
    In the Plugins window, click on the Marketplace section and select Themes.
    Select Marketplace → Themes
    Click on the title of a theme to get the details, including screenshots.
    Logseq theme details page
    To install a theme, use the Install button adjacent to the theme in the Marketplace.
    Click Install to install the theme
    Enable/disable themes in Logseq
    🚧 Changing themes is not done in this window. Theme switching will be discussed below.
    All the installed themes will be listed in the Menu → Plugins → Installed → Themes section.
    Installed themes listed
    From here, you can disable/enable themes using the toggle button.
    Changing themes
    Make sure all the desired installed themes are enabled because disabled themes won't be shown in the theme switcher.
    Click on the main menu button and select the Themes option.
    Click on Menu → Themes
    This will bring up a drop-down menu from which you can select a theme. This is shown in the short video below.
    Updating plugins and themes
    Occasionally, plugins and themes will provide updates.
    To check for available plugin/theme updates, click on Menu → Plugins.
    Here, select the Installed section to access installed Themes and Plugins. There should be a Check for Update button for each item.
    Click on Check Update
    Click on it to check if any updates are available for the selected plugin/theme.
    Uninstall plugins and themes
    By now you know that in Logseq, both plugins and themes are considered plugins. So, you can uninstall both in the same way.
    First, click on Menu button and select the Plugins option.
    Click on the Menu and select Plugins
    Here, go to the Installed section. Now, if you want to remove an installed plugin, go to the Plugins tab; if you would like to remove an installed theme, go to the Themes tab.
    Select Plugins or Themes Section
    Hover over the settings gear of the item that needs to be removed and select the Uninstall button.
    Uninstall a Plugin or Theme
    When prompted for confirmation, click on Yes, and the plugin/theme will be removed.
    Manage plugins from Logseq settings
    Logseq settings provide a neat place for tweaking the installed plugins and themes if they offer extra settings.
    Click on the menu button on the top-bar and select the Settings button.
    Click on Menu → Settings
    In the settings window, click on the Plugins section.
    Click on Plugins Section in Settings
    Here, you get a list of plugins and themes that offer some tweaks.
    Plugin settings in Logseq Settings window
    And that's all you need to know about exploring plugins and themes in Logseq. In the next tutorial in this series, I'll discuss special pages like Journal. Stay tuned.
  2. by: Chris Coyier
    Mon, 21 Apr 2025 17:10:35 +0000

    I enjoyed Trys Mudford’s explanation of making rounded triangular boxes. It was a very real-world client need, and I do tend to prefer reading about technical solutions to real problems over theoretical ones. This one was tricky because this particular shape doesn’t have a terribly obvious way to draw it on the web.
    CSS’ clip-path is useful, but the final rounding was done with an unintuitive feGaussianBlur SVG filter. You could draw it all in SVG, but I think the % values you get to use with clip-path are a more natural fit to web content than pure SVG is. SVG just wasn’t born in a responsive web design world.
    The thing is: SVG has a viewBox which is a fixed coordinate system on which you draw things. The final SVG can be scaled and squished and stuff, but it’s all happening on this fixed grid.
    I remember when trying to learn the <path d=""> syntax in SVG how it's almost an entire language unto itself, with lots of different letters issuing commands to a virtual pen. For example:
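    A tiny illustrative snippet (not from the original article): M moves the pen, L draws straight segments, and Z closes the shape, all in viewBox units.

    <svg viewBox="0 0 100 100" xmlns="http://www.w3.org/2000/svg">
      <!-- start at the bottom-left, draw to the top-center, then the bottom-right, and close -->
      <path d="M 10 90 L 50 10 L 90 90 Z" fill="currentColor" />
    </svg>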
    That syntax for the d attribute (also expressed with the path() function) can be applied in CSS, but I always thought that was very weird. The numbers are “unitless” in SVG, and that makes sense because the numbers apply to that invisible fixed grid put in place by the viewBox. But there is no viewBox in regular web layout, so those unitless numbers are translated to px, and px also isn’t particularly responsive web design friendly.
    This was my mind’s context when I saw the Safari 18.4 new features. One of them being a new shape() function:
    Yes! I’m glad they get it. I felt like I was going crazy when I would talk about this issue and get met with blank stares.
    Trys got so close with clip-path: polygon() alone on those rounded arrow shapes. The % values work nicely for random amounts of content inside (e.g. the “nose” should be at 50% of the height), and if the shape of the arrow needed to be maintained, px values could be mixed and matched in there.
    But the rounding was missing. There is no rounding with polygon().
    Or so I thought? I was on the draft spec anyway looking at shape(), which we’ll circle back to, but it does define the same round keyword and provide geometric diagrams with expectations on how it’s implemented.
    There are no code examples, but I think it would look something like this:
    /* might work one day? */
    clip-path: polygon(
      0% 0% round 0%,
      75% 0% round 10px,
      100% 50% round 10px,
      75% 100% round 10px,
      0% 100% round 0%
    );

    I’d say “draft specs are just… draft specs”, but stable Safari is shipping with stuff in this draft spec so I don’t know how all that works. I did test this syntax across the browsers and nothing supports it. If it did, Trys’ work would have been quite a bit easier. Although the examples in that post where a border follows the curved paths… that’s still hard. Maybe we need clip-path-border?
    There is precedent for rounding in “basic shape” functions already. The inset() function has a round keyword which produces a rounded rectangle (think a simple border-radius). See this example, which actually does work.
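    As a rough sketch of that idea (the values here are illustrative, not taken from the linked demo):

    .rounded-box {
      /* inset() pulls the clip edge in from each side; round adds the corner radius */
      clip-path: inset(10px 20px round 16px);
    }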
    But anyway: that new shape() function. It looks like it is trying to replicate (the entire?) power of <path d=""> but do it with a more CSS friendly/native syntax. I’ll post the current syntax from the spec to help paint the picture that it’s a whole new language (🫥):
    <shape-command> = <move-command> | <line-command> | close | <horizontal-line-command> | <vertical-line-command> | <curve-command> | <smooth-command> | <arc-command>
    <move-command> = move <command-end-point>
    <line-command> = line <command-end-point>
    <horizontal-line-command> = hline [ to [ <length-percentage> | left | center | right | x-start | x-end ] | by <length-percentage> ]
    <vertical-line-command> = vline [ to [ <length-percentage> | top | center | bottom | y-start | y-end ] | by <length-percentage> ]
    <curve-command> = curve [ [ to <position> with <control-point> [ / <control-point> ]? ] | [ by <coordinate-pair> with <relative-control-point> [ / <relative-control-point> ]? ] ]
    <smooth-command> = smooth [ [ to <position> [ with <control-point> ]? ] | [ by <coordinate-pair> [ with <relative-control-point> ]? ] ]
    <arc-command> = arc <command-end-point> [ [ of <length-percentage>{1,2} ] && <arc-sweep>? && <arc-size>? && [rotate <angle>]? ]
    <command-end-point> = [ to <position> | by <coordinate-pair> ]
    <control-point> = [ <position> | <relative-control-point> ]
    <relative-control-point> = <coordinate-pair> [ from [ start | end | origin ] ]?
    <coordinate-pair> = <length-percentage>{2}
    <arc-sweep> = cw | ccw
    <arc-size> = large | small

    So instead of somewhat obtuse single-letter commands in the path syntax, these have more understandable names. Here’s an example again from the spec that draws a speech bubble shape:
    .bubble {
      clip-path: shape(
        from 5px 0,
        hline to calc(100% - 5px),
        curve to right 5px with right top,
        vline to calc(100% - 8px),
        curve to calc(100% - 5px) calc(100% - 3px) with right calc(100% - 3px),
        hline to 70%,
        line by -2px 3px,
        line by -2px -3px,
        hline to 5px,
        curve to left calc(100% - 8px) with left calc(100% - 3px),
        vline to 5px,
        curve to 5px top with left top
      );
    }

    You can see the rounded corners being drawn there with literal curve commands. I think it’s neat. So again Trys’ shapes could be drawn with this once it has more proper browser support. I love how with this syntax we can mix and match units, we could abstract them out with custom properties, we could animate them, they accept readable position keywords like “right”, we can use calc(), and all this really nice native CSS stuff that path() wasn’t able to give us. This is born in a responsive web design world.
    Very nice win, web platform.
  3. by: Janus Atienza
    Mon, 21 Apr 2025 16:36:45 +0000

    Microsoft SQL Server supports Linux operating systems, including Red Hat Enterprise Linux and Ubuntu, as well as container images on platforms like Kubernetes, Docker Engine, and OpenShift. Regardless of the platform on which you are using SQL Server, the databases are prone to corruption and inconsistencies. If your MDF/NDF files on a Linux system get corrupted for any reason, you can repair them. In this post, we’ll discuss the procedure to repair and restore a corrupt SQL database on a Linux system.
    Causes of corruption in MDF/NDF files in Linux:
    The SQL database files stored on a Linux system can get corrupted due to one of the following reasons:
    Sudden system shutdown.
    Bugs in the SQL Server software.
    The system’s hard drive, where the database files are saved, has bad sectors.
    The operating system crashes while you are working on the database.
    Hardware issues or malware infection.
    The system runs out of space.

    Ways to repair and restore corrupt SQL databases in Linux
    To repair a corrupt SQL database file stored on a Linux system, you can use SQL Server Management Studio (SSMS) connected to the SQL Server instance on Ubuntu or Red Hat Enterprise Linux, or use a professional SQL repair tool.
    Steps to repair a corrupt SQL database on a Linux system:
    First, launch SQL Server on your Linux system with the steps below:
    Open the terminal with Ctrl+Alt+T or Alt+F2.
    Next, run the command below and press the Enter key:
    sudo systemctl start mssql-server
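    Optionally, confirm that the service is actually running before connecting with SSMS:

    sudo systemctl status mssql-server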
    In SSMS, follow the steps below to restore and repair the database file on the Linux system:
    Step 1: If you have an updated backup file, you can use it to restore the corrupt database. Here’s the command:
    BACKUP DATABASE [AdventureWorks2019]
    TO DISK = N'C:\backups\DBTesting.bak'
    WITH DIFFERENTIAL, NOFORMAT, NOINIT,
    NAME = N'AdventureWorks2019-Full Database Backup',
    SKIP, NOREWIND, NOUNLOAD, STATS = 10
    GO
    Step 2: If you have no backup, then, with admin rights, run the DBCC CHECKDB command in SQL Server Management Studio (SSMS). Here, the corrupted database is named “DBTesting”. Before using the command, first set the database to SINGLE_USER mode. Here is the command:
    ALTER DATABASE DBTesting SET SINGLE_USER
    DBCC CHECKDB ('DBTesting', REPAIR_REBUILD)
    GO
    If REPAIR_REBUILD fails to repair the problematic MDF file, you can try the REPAIR_ALLOW_DATA_LOSS option of the DBCC CHECKDB command:
    DBCC CHECKDB (N'DBTesting', REPAIR_ALLOW_DATA_LOSS) WITH ALL_ERRORMSGS, NO_INFOMSGS;
    GO
    Next, change the mode of the database from SINGLE_USER to MULTI_USER by executing the command below:
    ALTER DATABASE DBTesting SET MULTI_USER
    Using the above command can help you repair a corrupt MDF file, but it may remove data pages containing inconsistent data while repairing. As a result, you can lose data.
    Step 3: Use a professional SQL repair tool
    If you don’t want to risk the data in the database, then install a professional MS SQL recovery tool, such as Stellar Repair for MS SQL. The tool is equipped with enhanced algorithms that can help you repair corrupt or inconsistent MDF/NDF files, even on a Linux system. Here are the steps to install and launch Stellar Repair for MS SQL:
    First, open the terminal on your Linux/Ubuntu system.
    Next, run the command below:
    $ sudo apt install app_name
    Here, replace app_name with the absolute path of the Stellar Repair for MS SQL package.
    Next, launch the application on your Ubuntu system using the steps below:
    In the Activities overview, locate the Stellar Repair for MS SQL application and press the Enter key.
    Enter the system password to authenticate.
    Next, select the database in Stellar Repair for MS SQL’s user interface by clicking on Select Database.
    After selecting an MDF file, click Repair.
    For detailed steps, you can read the KB article.

    To Conclude
    If you are working on SQL Server installed on a Linux system in a virtual machine, your system may suddenly crash and the MDF file may get corrupted. In this case, or in any other scenario where the SQL database file becomes inaccessible on a Linux system, you can repair it using the two methods described above. To repair corrupt MDF files quickly, without data loss and file size restrictions, you can take the help of a professional MS SQL repair tool. The tool supports repairing MDF files on both Windows and Linux systems.
    The post Linux SQL Server Database Recovery: Restoring Corrupt Databases appeared first on Unixmen.
  4. by: Abhishek Kumar
    Sun, 20 Apr 2025 14:46:21 GMT

    Large Language Models (LLMs) are powerful, but they have one major limitation: they rely solely on the knowledge they were trained on.
    This means they lack real-time, domain-specific updates unless retrained, an expensive and impractical process. This is where Retrieval-Augmented Generation (RAG) comes in.
    RAG allows an LLM to retrieve relevant external knowledge before generating a response, effectively giving it access to fresh, contextual, and specific information.
    Imagine having an AI assistant that not only remembers general facts but can also refer to your PDFs, notes, or private data for more precise responses.
    This article takes a deep dive into how RAG works, how LLMs are trained, and how we can use Ollama and Langchain to implement a local RAG system that fine-tunes an LLM’s responses by embedding and retrieving external knowledge dynamically.
    By the end of this tutorial, we’ll build a PDF-based RAG project that allows users to upload documents and ask questions, with the model responding based on stored data.
    ✋I’m not an AI expert. This article is a hands-on look at Retrieval Augmented Generation (RAG) with Ollama and Langchain, meant for learning and experimentation. There might be mistakes, and if you spot something off or have better insights, feel free to share. It’s nowhere near the scale of how enterprises handle RAG, where they use massive datasets, specialized databases, and high-performance GPUs.

    What is Retrieval-Augmented Generation (RAG)?
    RAG is an AI framework that improves LLM responses by integrating real-time information retrieval.
    Instead of relying only on its training data, the LLM retrieves relevant documents from an external source (such as a vector database) before generating an answer.
    How RAG works
    Query Input – The user submits a question.
    Document Retrieval – A search algorithm fetches relevant text chunks from a vector store.
    Contextual Response Generation – The retrieved text is fed into the LLM, guiding it to produce a more accurate and relevant answer.
    Final Output – The response, now grounded in the retrieved knowledge, is returned to the user.

    Why use RAG instead of fine-tuning?
    No retraining required – Traditional fine-tuning demands a lot of GPU power and labeled datasets. RAG eliminates this need by retrieving data dynamically.
    Up-to-date knowledge – The model can refer to newly uploaded documents instead of relying on outdated training data.
    More accurate and domain-specific answers – Ideal for legal, medical, or research-related tasks where accuracy is crucial.

    How LLMs are trained (and why RAG improves them)
    Before diving into RAG, let’s understand how LLMs are trained:
    Pre-training – The model learns language patterns, facts, and reasoning from vast amounts of text (e.g., books, Wikipedia).
    Fine-tuning – It is further trained on specialized datasets for specific use cases (e.g., medical research, coding assistance).
    Inference – The trained model is deployed to answer user queries.

    While fine-tuning is helpful, it has limitations:
    It is computationally expensive.
    It does not allow dynamic updates to knowledge.
    It may introduce biases if trained on limited datasets.

    With RAG, we bypass these issues by allowing real-time retrieval from external sources, making LLMs far more adaptable.
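    Conceptually, the whole loop fits in a few lines. Here is a minimal, framework-free sketch; the retriever and model calls are hypothetical stand-ins for what we build below with ChromaDB and Ollama:

    def retrieve_chunks(question: str) -> list[str]:
        # Placeholder: a real implementation would query a vector store such as ChromaDB.
        return ["<relevant text chunk retrieved from your documents>"]

    def generate_answer(prompt: str) -> str:
        # Placeholder: a real implementation would call an LLM, e.g. through Ollama.
        return "<model answer grounded in the supplied context>"

    def rag_answer(question: str) -> str:
        context = "\n".join(retrieve_chunks(question))                    # 1. retrieve
        prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"  # 2. augment
        return generate_answer(prompt)                                    # 3. generate

    print(rag_answer("What does this document say about X?"))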
    Building a local RAG application with Ollama and Langchain
    In this tutorial, we'll build a simple RAG-powered document retrieval app using LangChain, ChromaDB, and Ollama.
    The app lets users upload PDFs, embed them in a vector database, and query for relevant information.
    💡All the code is available in our GitHub repository. You can clone it and start testing right away.

    Installing dependencies
    To avoid messing up our system packages, we’ll first create a Python virtual environment. This keeps our dependencies isolated and prevents conflicts with system-wide Python packages.
    Navigate to your project directory and create a virtual environment:
    cd ~/RAG-Tutorial
    python3 -m venv venv

    Now, activate the virtual environment:
    source venv/bin/activate

    Once activated, your terminal prompt should change to indicate that you are now inside the virtual environment.
    With the virtual environment activated, install the necessary Python packages using requirements.txt:
    pip install -r requirements.txt

    This will install all the required dependencies for our RAG pipeline, including Flask, LangChain, Ollama, and Pydantic.
    Once installed, you’re all set to proceed with the next steps!
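    The file’s exact contents aren’t reproduced in this article, but judging from the imports used in the scripts below, a plausible requirements.txt would list something like this (pin versions as you see fit):

    flask
    python-dotenv
    langchain
    langchain-community
    langchain-text-splitters
    chromadb
    unstructured[pdf]
    pydantic
    ollama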
    Project structure
    Our project is structured as follows:
    RAG-Tutorial/
    │── app.py             # Main Flask server
    │── embed.py           # Handles document embedding
    │── query.py           # Handles querying the vector database
    │── get_vector_db.py   # Manages ChromaDB instance
    │── .env               # Stores environment variables
    │── requirements.txt   # List of dependencies
    └── _temp/             # Temporary storage for uploaded files

    Step 1: Creating app.py (Flask API Server)
    This script sets up a Flask server with two endpoints:
    /embed – Uploads a PDF and stores its embeddings in ChromaDB.
    /query – Accepts a user query and retrieves relevant text chunks from ChromaDB.
    route_embed(): Saves an uploaded file and embeds its contents in ChromaDB.
    route_query(): Accepts a query and retrieves relevant document chunks.

    import os
    from dotenv import load_dotenv
    from flask import Flask, request, jsonify

    from embed import embed
    from query import query
    from get_vector_db import get_vector_db

    load_dotenv()

    TEMP_FOLDER = os.getenv('TEMP_FOLDER', './_temp')
    os.makedirs(TEMP_FOLDER, exist_ok=True)

    app = Flask(__name__)

    @app.route('/embed', methods=['POST'])
    def route_embed():
        if 'file' not in request.files:
            return jsonify({"error": "No file part"}), 400
        file = request.files['file']
        if file.filename == '':
            return jsonify({"error": "No selected file"}), 400
        embedded = embed(file)
        # Return 400 only when embedding fails
        return jsonify({"message": "File embedded successfully"}) if embedded else (jsonify({"error": "Embedding failed"}), 400)

    @app.route('/query', methods=['POST'])
    def route_query():
        data = request.get_json()
        response = query(data.get('query'))
        # Return 400 only when the query produces no response
        return jsonify({"message": response}) if response else (jsonify({"error": "Query failed"}), 400)

    if __name__ == '__main__':
        app.run(host="0.0.0.0", port=8080, debug=True)

    Step 2: Creating embed.py (embedding documents)
    This file handles document processing, extracts text, and stores vector embeddings in ChromaDB.
    allowed_file(): Ensures only PDFs are processed.
    save_file(): Saves the uploaded file temporarily.
    load_and_split_data(): Uses UnstructuredPDFLoader and RecursiveCharacterTextSplitter to extract text and split it into manageable chunks.
    embed(): Converts text chunks into vector embeddings and stores them in ChromaDB.

    import os
    from datetime import datetime
    from werkzeug.utils import secure_filename
    from langchain_community.document_loaders import UnstructuredPDFLoader
    from langchain_text_splitters import RecursiveCharacterTextSplitter

    from get_vector_db import get_vector_db

    TEMP_FOLDER = os.getenv('TEMP_FOLDER', './_temp')

    def allowed_file(filename):
        return filename.lower().endswith('.pdf')

    def save_file(file):
        filename = f"{datetime.now().timestamp()}_{secure_filename(file.filename)}"
        file_path = os.path.join(TEMP_FOLDER, filename)
        file.save(file_path)
        return file_path

    def load_and_split_data(file_path):
        loader = UnstructuredPDFLoader(file_path=file_path)
        data = loader.load()
        text_splitter = RecursiveCharacterTextSplitter(chunk_size=7500, chunk_overlap=100)
        return text_splitter.split_documents(data)

    def embed(file):
        if file and allowed_file(file.filename):
            file_path = save_file(file)
            chunks = load_and_split_data(file_path)
            db = get_vector_db()
            db.add_documents(chunks)
            db.persist()
            os.remove(file_path)
            return True
        return False

    Step 3: Creating query.py (Query processing)
    It retrieves relevant information from ChromaDB and uses an LLM to generate responses.
    get_prompt(): Creates a structured prompt for multi-query retrieval.
    query(): Uses Ollama's LLM to rephrase the user query, retrieve relevant document chunks, and generate a response.

    import os
    from langchain_community.chat_models import ChatOllama
    from langchain.prompts import ChatPromptTemplate, PromptTemplate
    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.runnables import RunnablePassthrough
    from langchain.retrievers.multi_query import MultiQueryRetriever

    from get_vector_db import get_vector_db

    LLM_MODEL = os.getenv('LLM_MODEL')
    OLLAMA_HOST = os.getenv('OLLAMA_HOST', 'http://localhost:11434')

    def get_prompt():
        QUERY_PROMPT = PromptTemplate(
            input_variables=["question"],
            template="""You are an AI assistant. Generate five reworded versions of the user question to improve document retrieval. Original question: {question}""",
        )
        template = "Answer the question based ONLY on this context:\n{context}\nQuestion: {question}"
        prompt = ChatPromptTemplate.from_template(template)
        return QUERY_PROMPT, prompt

    def query(input):
        if input:
            llm = ChatOllama(model=LLM_MODEL)
            db = get_vector_db()
            QUERY_PROMPT, prompt = get_prompt()
            retriever = MultiQueryRetriever.from_llm(db.as_retriever(), llm, prompt=QUERY_PROMPT)
            chain = ({"context": retriever, "question": RunnablePassthrough()} | prompt | llm | StrOutputParser())
            return chain.invoke(input)
        return None

    Step 4: Creating get_vector_db.py (Vector database management)
    It initializes and manages ChromaDB, which stores text embeddings for fast retrieval.
    get_vector_db(): Initializes ChromaDB with the Nomic embedding model and loads stored document vectors.

    import os
    from langchain_community.embeddings import OllamaEmbeddings
    from langchain_community.vectorstores.chroma import Chroma

    CHROMA_PATH = os.getenv('CHROMA_PATH', 'chroma')
    COLLECTION_NAME = os.getenv('COLLECTION_NAME')
    TEXT_EMBEDDING_MODEL = os.getenv('TEXT_EMBEDDING_MODEL')
    OLLAMA_HOST = os.getenv('OLLAMA_HOST', 'http://localhost:11434')

    def get_vector_db():
        embedding = OllamaEmbeddings(model=TEXT_EMBEDDING_MODEL, show_progress=True)
        return Chroma(collection_name=COLLECTION_NAME, persist_directory=CHROMA_PATH, embedding_function=embedding)

    Step 5: Environment variables
    Create a .env file to store environment variables:
    TEMP_FOLDER = './_temp'
    CHROMA_PATH = 'chroma'
    COLLECTION_NAME = 'rag-tutorial'
    LLM_MODEL = 'smollm:360m'
    TEXT_EMBEDDING_MODEL = 'nomic-embed-text'

    TEMP_FOLDER: Stores uploaded PDFs temporarily.
    CHROMA_PATH: Defines the storage location for ChromaDB.
    COLLECTION_NAME: Sets the ChromaDB collection name.
    LLM_MODEL: Specifies the LLM model used for querying.
    TEXT_EMBEDDING_MODEL: Defines the embedding model for vector storage.

    I'm using these lightweight models for this tutorial, as I don't have a dedicated GPU to run inference on large models. You can edit your LLMs in the .env file.

    Testing the makeshift RAG + LLM Pipeline
    Now that our RAG app is set up, we need to validate its effectiveness. The goal is to ensure that the system correctly:
    Embeds documents – Converts text into vector embeddings and stores them in ChromaDB.
    Retrieves relevant chunks – Fetches the most relevant text snippets from ChromaDB based on a query.
    Generates meaningful responses – Uses Ollama to construct an intelligent response based on retrieved data.

    This testing phase ensures that our makeshift RAG pipeline is functioning as expected and can be fine-tuned if necessary.
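    One more prerequisite: the models named in .env must be available locally in Ollama. If you haven't pulled them yet, you can do so first (the tags match the .env above):

    ollama pull smollm:360m
    ollama pull nomic-embed-text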
    Running the flask server
    We first need to make sure our Flask app is running. Open a terminal, navigate to your project directory, and activate your virtual environment:
    cd ~/RAG-Tutorial
    source venv/bin/activate   # On Linux/macOS
    # or
    venv\Scripts\activate      # On Windows (if using venv)

    Now, run the Flask app:
    python3 app.py

    If everything is set up correctly, the server should start and listen on http://localhost:8080. You should see output like:
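    The exact text varies with the Flask version, but it should resemble the standard development-server banner:

     * Serving Flask app 'app'
     * Debug mode: on
     * Running on all addresses (0.0.0.0)
     * Running on http://127.0.0.1:8080
    Press CTRL+C to quit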
    Once the server is running, we'll use curl commands to interact with our pipeline and analyze the responses to confirm everything works as expected.
    1. Testing Document Embedding
    The first step is to upload a document and ensure its contents are successfully embedded into ChromaDB.
    curl --request POST \
      --url http://localhost:8080/embed \
      --header 'Content-Type: multipart/form-data' \
      --form file=@/path/to/file.pdf

    Breakdown:
    curl --request POST → Sends a POST request to our API.
    --url http://localhost:8080/embed → Targets our embed endpoint running on port 8080.
    --header 'Content-Type: multipart/form-data' → Specifies that we are uploading a file.
    --form file=@/path/to/file.pdf → Attaches a file (in this case, a PDF) to be processed.

    Expected Response:
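    Going by the route_embed() handler in app.py, a successful upload returns a simple JSON confirmation:

    {"message": "File embedded successfully"}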
    What’s Happening Internally?
    The server reads the uploaded PDF file.
    The text is extracted, split into chunks, and converted into vector embeddings.
    These embeddings are stored in ChromaDB for future retrieval.

    If Something Goes Wrong:
    Issue | Possible Cause | Fix
    "status": "error" | File not found or unreadable | Check the file path and permissions
    collection.count() == 0 | ChromaDB storage failure | Restart ChromaDB and check logs

    2. Querying the Document
    Now that our document is embedded, we can test whether relevant information is retrieved when we ask a question.
    curl --request POST \
      --url http://localhost:8080/query \
      --header 'Content-Type: application/json' \
      --data '{ "query": "Question about the PDF?" }'

    Breakdown:
    curl --request POST → Sends a POST request.
    --url http://localhost:8080/query → Targets our query endpoint.
    --header 'Content-Type: application/json' → Specifies that we are sending JSON data.
    --data '{ "query": "Question about the PDF?" }' → Sends our search query to retrieve relevant information.

    Expected Response:
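    Per route_query() in app.py, the answer comes back wrapped in a message field; the actual text depends on your document and model, for example:

    {"message": "<answer generated by the model from the retrieved chunks>"}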
    What’s Happening Internally?
    The query "What's in this file?" is passed to ChromaDB to retrieve the most relevant chunks.
    The retrieved chunks are passed to Ollama as context for generating a response.
    Ollama formulates a meaningful reply based on the retrieved information.

    If the Response is Not Good Enough:
    Issue | Possible Cause | Fix
    Retrieved chunks are irrelevant | Poor chunking strategy | Adjust chunk sizes and retry embedding
    "llm_response": "I don't know" | Context wasn't passed properly | Check if ChromaDB is returning results
    Response lacks document details | LLM needs better instructions | Modify the system prompt

    3. Fine-tuning the LLM for better responses
    If Ollama’s responses aren’t detailed enough, we need to refine how we provide context.
    Tuning strategies:
    Improve Chunking – Ensure text chunks are large enough to retain meaning but small enough for effective retrieval.
    Enhance Retrieval – Increase n_results to fetch more relevant document chunks.
    Modify the LLM Prompt – Add structured instructions for better responses.

    Example system prompt for Ollama:
    prompt = f"""
    You are an AI assistant helping users retrieve information from documents.
    Use the following document snippets to provide a helpful answer.
    If the answer isn't in the retrieved text, say 'I don't know.'

    Retrieved context:
    {retrieved_chunks}

    User's question:
    {query_text}
    """

    This ensures that Ollama:
    Uses retrieved text properly.
    Avoids hallucinations by sticking to available context.
    Provides meaningful, structured answers.

    Final thoughts
    Building this makeshift RAG LLM tuning pipeline has been an insightful experience, but I want to be clear, I’m not an AI expert. Everything here is something I’m still learning myself.
    There are bound to be mistakes, inefficiencies, and things that could be improved. If you’re someone who knows better or if I’ve missed any crucial points, please feel free to share your insights.
    That said, this project gave me a small glimpse into how RAG works. At its core, RAG is about fetching the right context before asking an LLM to generate a response.
    It’s what makes AI chatbots capable of retrieving information from vast datasets instead of just responding based on their training data.
    Large companies use this technique at scale, processing massive amounts of data, fine-tuning their models, and optimizing their retrieval mechanisms to build AI assistants that feel intuitive and knowledgeable.
    What we built here is nowhere near that level, but it was still fascinating to see how we can direct an LLM’s responses by controlling what information it retrieves.
    Even with this basic setup, we saw how much impact retrieval quality, chunking strategies, and prompt design have on the final response.
    This makes me wonder, have you ever thought about training your own LLM? Would you be interested in something like this but fine-tuned specifically for Linux tutorials?
    Imagine a custom-tuned LLM that could answer your Linux questions with accurate, RAG-powered responses, would you use it? Let us know in the comments!
  5. by: Ojekudo Oghenemaro Emmanuel
    Sun, 20 Apr 2025 08:04:07 GMT

    Introduction
    In today’s digital world, security is paramount, especially when dealing with sensitive data like user authentication and financial transactions. One of the most effective ways to enhance security is by implementing One-Time Password (OTP) authentication. This article explores how to implement OTP authentication in a Laravel backend with a Vue.js frontend, ensuring secure transactions.
    Why Use OTP Authentication?
    OTP authentication provides an extra layer of security beyond traditional username and password authentication. Some key benefits include:
    Prevention of Unauthorized Access: Even if login credentials are compromised, an attacker cannot log in without the OTP.
    Enhanced Security for Transactions: OTPs can be used to confirm high-value transactions, preventing fraud.
    Temporary Validity: Since OTPs expire after a short period, they reduce the risk of reuse by attackers.

    Prerequisites
    Before getting started, ensure you have the following:
    Laravel 8 or later installed
    Vue.js configured in your project
    A mail or SMS service provider for sending OTPs (e.g., Twilio, Mailtrap)
    Basic understanding of Laravel and Vue.js

    In this guide, we’ll implement OTP authentication in a Laravel (backend) and Vue.js (frontend) application. We’ll cover:
    Setting up Laravel and Vue (frontend) from scratch
    Setting up OTP generation and validation in Laravel
    Creating a Vue.js component for OTP input
    Integrating OTP authentication into login workflows
    Enhancing security with best practices

    By the end, you’ll have a fully functional OTP authentication system ready to enhance the security of your fintech or web application.
    Setting Up Laravel for OTP Authentication
    Step 1: Install Laravel and Required Packages
    If you haven't already set up a Laravel project, create a new one:
    composer create-project "laravel/laravel:^10.0" example-app

    Next, install the Laravel Breeze package for frontend scaffolding:
    composer require laravel/breeze --dev

    After composer has finished installing, run the following command to select the framework you want to use—the Vue configuration:
    php artisan breeze:install

    You’ll see a prompt with the available stacks:
    Which Breeze stack would you like to install? - Vue with Inertia
    Would you like any optional features? - None
    Which testing framework do you prefer? - PHPUnit

    Breeze will automatically install the necessary packages for your Laravel Vue project. You should see:
    INFO Breeze scaffolding installed successfully.

    Now run the npm command to build your frontend assets:
    npm run dev

    Then, open another terminal and launch your Laravel app:
    php artisan serve

    Step 2: Setting up OTP generation and validation in Laravel
    We'll use a mail testing platform called Mailtrap to send and receive mail locally. If you don’t have a mail testing service set up, sign up at Mailtrap to get your SMTP credentials and add them to your .env file:
    MAIL_MAILER=smtp
    MAIL_HOST=sandbox.smtp.mailtrap.io
    MAIL_PORT=2525
    MAIL_USERNAME=1780944422200a
    MAIL_PASSWORD=a8250ee453323b
    MAIL_ENCRYPTION=tls
    MAIL_FROM_ADDRESS=hello@example.com
    MAIL_FROM_NAME="${APP_NAME}"

    To send OTPs to users, we’ll use Laravel’s built-in mail services. Create a mail class and controller:
    php artisan make:mail OtpMail
    php artisan make:controller OtpController

    Then modify the OtpMail class:
    <?php

    namespace App\Mail;

    use Illuminate\Bus\Queueable;
    use Illuminate\Contracts\Queue\ShouldQueue;
    use Illuminate\Mail\Mailable;
    use Illuminate\Mail\Mailables\Content;
    use Illuminate\Mail\Mailables\Envelope;
    use Illuminate\Queue\SerializesModels;

    class OtpMail extends Mailable
    {
        use Queueable, SerializesModels;

        public $otp;

        /**
         * Create a new message instance.
         */
        public function __construct($otp)
        {
            $this->otp = $otp;
        }

        /**
         * Build the email message.
         */
        public function build()
        {
            return $this->subject('Your OTP Code')
                ->view('emails.otp')
                ->with(['otp' => $this->otp]);
        }

        /**
         * Get the message envelope.
         */
        public function envelope(): Envelope
        {
            return new Envelope(
                subject: 'OTP Mail',
            );
        }
    }

    Create a Blade view in resources/views/emails/otp.blade.php:
    <!DOCTYPE html>
    <html>
    <head>
        <title>Your OTP Code</title>
    </head>
    <body>
        <p>Hello,</p>
        <p>Your One-Time Password (OTP) is: <strong>{{ $otp }}</strong></p>
        <p>This code is valid for 10 minutes. Do not share it with anyone.</p>
        <p>Thank you!</p>
    </body>
    </html>

    Step 3: Creating a Vue.js component for OTP input
    Normally, after login or registration, users are redirected to the dashboard. In this tutorial, we add an extra security step that validates users with an OTP before granting dashboard access.
    Create two Vue files:
    Request.vue: requests the OTP
    Verify.vue: inputs the OTP for verification

    Now we create the routes that return these views and handle creating, storing, and sending OTP codes through the mail class. Head to the web.php file:
    Route::middleware('auth')->group(function () {
        Route::get('/request', [OtpController::class, 'create'])->name('request');
        Route::post('/store-request', [OtpController::class, 'store'])->name('send.otp.request');
        Route::get('/verify', [OtpController::class, 'verify'])->name('verify');
        Route::post('/verify-request', [OtpController::class, 'verify_request'])->name('verify.otp.request');
    });

    Next, add the following methods to the OtpController. They return the views for Request.vue and Verify.vue and implement creating, storing, sending (through the mail class), and verifying OTP codes:
    public function create(Request $request)
    {
        return Inertia::render('Request', [
            'email' => $request->query('email', ''),
        ]);
    }

    public function store(Request $request)
    {
        $request->validate([
            'email' => 'required|email|exists:users,email',
        ]);

        $otp = rand(100000, 999999);
        Cache::put('otp_' . $request->email, $otp, now()->addMinutes(10));
        Log::info("OTP generated for " . $request->email . ": " . $otp);
        Mail::to($request->email)->send(new OtpMail($otp));

        return redirect()->route('verify', ['email' => $request->email]);
    }

    public function verify(Request $request)
    {
        return Inertia::render('Verify', [
            'email' => $request->query('email'),
        ]);
    }

    public function verify_request(Request $request)
    {
        $request->validate([
            'email' => 'required|email|exists:users,email',
            'otp' => 'required|digits:6',
        ]);

        $cachedOtp = Cache::get('otp_' . $request->email);
        Log::info("OTP entered: " . $request->otp);
        Log::info("OTP stored in cache: " . ($cachedOtp ?? 'No OTP found'));

        if (!$cachedOtp) {
            return back()->withErrors(['otp' => 'OTP has expired. Please request a new one.']);
        }

        if ((string) $cachedOtp !== (string) $request->otp) {
            return back()->withErrors(['otp' => 'Invalid OTP. Please try again.']);
        }

        Cache::forget('otp_' . $request->email);

        $user = User::where('email', $request->email)->first();
        if ($user) {
            $user->email_verified_at = now();
            $user->save();
        }

        return redirect()->route('dashboard')->with('success', 'OTP Verified Successfully!');
    }

    Having set all this code, we return to the request.vue file to set it up.
    <script setup>
    import AuthenticatedLayout from '@/Layouts/AuthenticatedLayout.vue';
    import InputError from '@/Components/InputError.vue';
    import InputLabel from '@/Components/InputLabel.vue';
    import PrimaryButton from '@/Components/PrimaryButton.vue';
    import TextInput from '@/Components/TextInput.vue';
    import { Head, useForm } from '@inertiajs/vue3';

    const props = defineProps({
        email: {
            type: String,
            required: true,
        },
    });

    const form = useForm({
        email: props.email,
    });

    const submit = () => {
        form.post(route('send.otp.request'), {
            onSuccess: () => {
                alert("OTP has been sent to your email!");
                form.get(route('verify'), { email: form.email }); // Redirecting to OTP verification
            },
        });
    };
    </script>

    <template>
        <Head title="Request OTP" />
        <AuthenticatedLayout>
            <form @submit.prevent="submit">
                <div>
                    <InputLabel for="email" value="Email" />
                    <TextInput id="email" type="email" class="mt-1 block w-full" v-model="form.email" required autofocus />
                    <InputError class="mt-2" :message="form.errors.email" />
                </div>
                <div class="mt-4 flex items-center justify-end">
                    <PrimaryButton :class="{ 'opacity-25': form.processing }" :disabled="form.processing">
                        Request OTP
                    </PrimaryButton>
                </div>
            </form>
        </AuthenticatedLayout>
    </template>

    Having set all this code, we return to the verify.vue to set it up:
    <script setup>
    import AuthenticatedLayout from '@/Layouts/AuthenticatedLayout.vue';
    import InputError from '@/Components/InputError.vue';
    import InputLabel from '@/Components/InputLabel.vue';
    import PrimaryButton from '@/Components/PrimaryButton.vue';
    import TextInput from '@/Components/TextInput.vue';
    import { Head, useForm, usePage } from '@inertiajs/vue3';

    const page = usePage();

    // Get the email from the URL query params
    const email = page.props.email || '';

    // Initialize form with email and OTP field
    const form = useForm({
        email: email,
        otp: '',
    });

    // Submit function
    const submit = () => {
        form.post(route('verify.otp.request'), {
            onSuccess: () => {
                alert("OTP verified successfully! Redirecting...");
                window.location.href = '/dashboard'; // Change to your desired redirect page
            },
            onError: () => {
                alert("Invalid OTP. Please try again.");
            },
        });
    };
    </script>

    <template>
        <Head title="Verify OTP" />
        <AuthenticatedLayout>
            <form @submit.prevent="submit">
                <div>
                    <InputLabel for="otp" value="Enter OTP" />
                    <TextInput id="otp" type="text" class="mt-1 block w-full" v-model="form.otp" required />
                    <InputError class="mt-2" :message="form.errors.otp" />
                </div>
                <div class="mt-4 flex items-center justify-end">
                    <PrimaryButton :disabled="form.processing">
                        Verify OTP
                    </PrimaryButton>
                </div>
            </form>
        </AuthenticatedLayout>
    </template>

    Step 4: Integrating OTP authentication into login and register workflows
    Update the login controller:
    public function store(LoginRequest $request): RedirectResponse
    {
        $request->authenticate();
        $request->session()->regenerate();

        return redirect()->intended(route('request', absolute: false));
    }

    Update the registration controller:
    public function store(Request $request): RedirectResponse
    {
        $request->validate([
            'name' => 'required|string|max:255',
            'email' => 'required|string|lowercase|email|max:255|unique:' . User::class,
            'password' => ['required', 'confirmed', Rules\Password::defaults()],
        ]);

        $user = User::create([
            'name' => $request->name,
            'email' => $request->email,
            'password' => Hash::make($request->password),
        ]);

        event(new Registered($user));
        Auth::login($user);

        return redirect(route('request', absolute: false));
    }

    Conclusion
    Implementing OTP authentication in Laravel and Vue.js enhances security for user logins and transactions. By generating, sending, and verifying OTPs, we can add an extra layer of protection against unauthorized access. This method is particularly useful for financial applications and sensitive user data.
  6. by: LHB Community
    Sun, 20 Apr 2025 12:23:45 +0530

    As a developer, efficiency is key. Being a full-stack developer myself, I’ve always thought of replacing boring tasks with automation.
    What could happen if I just keep writing new code in a Python file, and it gets evaluated every time I save it? Isn’t that a productivity boost?
    'Hot Reload' is that valuable feature of the modern development process that automatically reloads or refreshes the code after you make changes to a file. This helps the developers see the effect of their changes instantly and avoid manually restarting or refreshing the browser.
    Over these years, I’ve used tools like entr to keep Docker containers in sync every time I modify the docker-compose.yml file, or to keep testing different CSS designs on the fly with browser-sync.
    1. entr
    entr (Event Notify Test Runner) is a lightweight command line tool for monitoring file changes and triggering specified commands. It’s one of my favorite tools to restart any CLI process, whether it is triggering a docker build, restarting a Python script, or rebuilding a C project.
    For developers who are used to the command line, entr provides a simple and efficient way to perform tasks such as building, testing, or restarting services in real time.
    Key Features
    Lightweight, no additional dependencies.
    Highly customizable.
    Ideal for use in conjunction with scripts or build tools.
    Linux only.

    Installation
    All you have to do is type in the following command in the terminal:
    sudo apt install -y entr

    Usage
    Auto-trigger build tools: Use entr to automatically execute build commands like make, webpack, etc. Here's the command I use to do that:
    ls docker-compose.yml | entr -r docker build

    Here, the -r flag reloads the child process, which is the run command ‘docker build’.
    Automatically run tests: Automatically re-run unit tests or integration tests after modifying the code.
    ls *.ts | entr bun test

    2. nodemon
    nodemon is an essential tool for developers working on Node.js applications. It automatically monitors changes to project files and restarts the Node.js server when files are modified, eliminating the need for developers to restart the server manually.
    Key Features
    Monitor file changes and restart the Node.js server automatically.
    Supports JavaScript and TypeScript projects.
    Customize which files and directories to monitor.
    Supports common web frameworks such as Express, Hapi.

    Installation
    You can type in a single command in the terminal to install the tool:
    npm install -g nodemon

    If you are installing Node.js and npm for the first time on Ubuntu-based distributions, you can follow our Node.js installation tutorial.
    Usage
    When you type in the following command, it starts server.js and will automatically restart the server if the file changes.
    nodemon server.js

    3. LiveReload.net
    LiveReload.net is a very popular tool, especially for front-end developers. It automatically refreshes the browser after you save a file, helping developers see the effect of changes immediately, eliminating the need to manually refresh the browser.
    Unlike others, it is a web-based tool, and you need to head to its official website to get started. Every file remains in your local network. No files are uploaded to a third-party server.
    Key Features
    Seamless integration with editors.
    Supports custom trigger conditions to refresh the page.
    Good compatibility with front-end frameworks and static websites.

    Usage
    It's stupidly simple. Just load up the website, and drag and drop your folder to start making live changes. 
    4. fswatch
    fswatch is a cross-platform file change monitoring tool for Linux, macOS, and developers using it on Windows via WSL (Windows Subsystem for Linux). It is powerful enough to monitor multiple files and directories for changes and perform actions accordingly.
    Key Features
    Supports cross-platform operation and can be used on Linux and macOS.
    It can be used with custom scripts to trigger multiple operations.
    Flexible configuration options to filter specific types of file changes.

    Installation
    To install it on a Linux distribution, type in the following in the terminal:
    sudo apt install -y fswatch

    If you have a macOS computer, you can use the command:
    brew install fswatch

    Usage
    You can try typing in the command here:
    fswatch -o . | xargs -n1 -I{} make

    And then you can chain this command with an entr command for a rich interactive development experience.
    ls hellomake | entr -r ./hellomake

    The “fswatch” command will invoke make to compile the C application, and then if our binary “hellomake” is modified, we’ll run it again. Isn’t this a time saver?
    5. Watchexec
    Watchexec is a cross-platform command line tool for automating the execution of specified commands when a file or directory changes. It is a lightweight file monitor that helps developers automate tasks such as running tests, compiling code, or reloading services when a source code file changes. 
      Key Features
    Supports cross-platform use (macOS, Linux, Windows).
    Fast, written in Rust.
    Lightweight, no complex configuration.

    Installation
    On Linux, just type in:
    sudo apt install watchexec

    And, if you want to try it on macOS (via homebrew):
    brew install watchexec

    You can also download corresponding binaries for your system from the project’s Github releases section.
    Usage
    All you need to do is just run the command:
    watchexec -e py "pytest"

    This will run pytest every time a Python file in the current directory is modified.
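    For long-running processes rather than one-shot test runs, watchexec can also restart the child process on every change with its -r flag. A sketch, assuming a Node.js entry point named server.js:

    watchexec -r -e js "node server.js"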
    6. BrowserSync
    BrowserSync is a powerful tool that not only monitors file changes, but also synchronizes pages across multiple devices and browsers. BrowserSync can be ideal for developers who need to perform cross-device testing.
    Key features
    Cross-browser synchronization.
    Automatically refreshes multiple devices and browsers.
    Built-in local development server.

    Installation
    Considering you have Node.js installed first, type in the following command:
    npm i -g browser-sync

    Or, you can use:
    npx browser-sync

    Usage
    Here is how the commands for it would look like:
    browser-sync start --server --files "/*.css, *.js, *.html"
    npx browser-sync start --server --files "/*.css, *.js, *.html"

    You can use either of the two commands for your experiments.
    This command starts a local server and monitors the CSS, JS, and HTML files for changes, and the browser is automatically refreshed as soon as a change occurs. If you’re a developer and aren't using any modern frontend framework, this comes in handy.
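    If you already have a local backend server running, browser-sync can also wrap it with its proxy mode instead of serving files itself. A sketch, assuming an app listening on port 8080:

    browser-sync start --proxy "localhost:8080" --files "*.css, *.js, *.html"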
    7. watchdog & watchmedo
    Watchdog is a file system monitoring library written in Python that allows you to monitor file and directory changes in real time. Whether it's file creation, modification, deletion, or file move, Watchdog can help you catch these events and trigger the appropriate action.
    Key Features
    Cross-platform support.
    Provides full flexibility with its Python-based API.
    Includes the watchmedo script to hook any CLI application easily.

    Installation
    Install Python first, and then install with pip using the command below:
    pip install watchdog

    Usage
    Type in the following and watch it in action:
    watchmedo shell-command --patterns="*.py" --recursive --command="python factorial.py" .

    This command watches the current directory (and its subdirectories) and runs the given command whenever a matching file is modified, created, or deleted.
    In the command, --patterns="*.py" watches .py files, --recursive watches subdirectories, and --command="python factorial.py" runs the Python file.
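    Since Watchdog is first and foremost a Python library, you can also use its API directly instead of watchmedo. A minimal sketch that simply logs modifications in the current directory:

    import time
    from watchdog.observers import Observer
    from watchdog.events import FileSystemEventHandler

    class LogChanges(FileSystemEventHandler):
        def on_modified(self, event):
            # Called for every modification inside the watched tree
            print(f"Modified: {event.src_path}")

    observer = Observer()
    observer.schedule(LogChanges(), path=".", recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()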
    Conclusion
    Hot reloading tools have become increasingly important in the development process, and they can help developers save a lot of time and effort and increase productivity. With tools like entr, nodemon, LiveReload, Watchexec, Browser Sync, and others, you can easily automate reloading and live feedback without having to manually restart the server or refresh the browser.
    Integrating these tools into your development process can drastically reduce repetitive work and waiting time, allowing you to focus on writing high-quality code.
    Whether you're developing a front-end application or a back-end service or managing a complex project, using these hot-reloading tools will enhance your productivity.
    Bhuwan Mishra is a Fullstack developer, with Python and Go as his tools of choice. He takes pride in building and securing web applications, APIs, and CI/CD pipelines, as well as tuning servers for optimal performance. He also has a passion for working with Kubernetes.
  7. by: Sreenath
    Sat, 19 Apr 2025 13:00:24 GMT

    Simply creating well-formatted notes isn’t enough to manage the information you collect in daily life—accessibility is key.
    If you can't easily retrieve that information and its context, the whole point of "knowledge management" falls apart.
    From my experience using it daily for several months, I’d say Logseq does a better job of interlinking notes than any other app I’ve tried.
    So, without further ado, let’s dive in.
    The concept of page, links, and tags
    If you’ve used Logseq before, you’ve likely noticed one key thing: everything is a block. Your data is structured as intentional, individual blocks. When you type a sentence and hit Enter, instead of just creating a new line, Logseq starts a new bullet point.
    This design brings both clarity and complexity.
    In Logseq, pages are made up of bullet-formatted text. Each page acts like a link—and when you search for a page that doesn’t exist, Logseq simply creates it for you.
    Here’s the core idea: pages and tags function in a very similar way. You can think of a tag as a special kind of page that collects links to all content marked with that tag. For a deeper dive into this concept, I recommend checking out this forum post.
    Logseq also supports block references, which let you link directly to any specific block—meaning you can reference a single sentence from one note in another.
    📋Ultimately, it is the end-user's creativity that creates a perfect content organization. There is no one way of using Logseq for knowledge management. It's up to you how you use it.

    Creating a new page in Logseq
    Click on the top-left search icon. This will bring a search overlay. Here, enter the name of the page you want to create.
    If no such page is present, you will get an option to create a new page.
    Search for a note
    For example, I created a page called "My Logseq Notes" and you can see this newly created page in the 'All pages' tab on the Logseq sidebar.
    New page listed in "All Pages" tab
    Logseq stores all the created pages in the pages directory inside the Logseq folder you have chosen on your system.
    The Logseq pages directory in File Manager
    There won't be any nested directories to store sub-pages. All of that is done using links and tags. In fact, there is no point in looking into the Logseq directory manually. Use the app interface, where the data will appear organized.
    ⌨️ Use keyboard shortcut for creating pages
    Powerful tools like Logseq are better used with keyboard. You can create pages/links/references using only keyboard, without touching the mouse.
    The common syntax to create a page or link in Logseq is:
    #One-word-page-name

    You can press the # symbol and enter a one-word name. If no page with that name exists, a new page is created. Otherwise, a link to the mentioned page is added.
    If you need to create a page with multiple words, use:
    #[[Page with multiple words separated with space]]

    Place the name of the note within the double square brackets [[ ]].
    Create pages with single-word or multi-word names.
    Using Tags
    In the example above, I have created two pages, one without spaces in the name, while the other has spaces.
    Both of them can be considered as tags.
    Confused? The further interlinking of these pages actually defines whether it's a page or a tag.
    If you are using it as a 'special page' to accumulate similar content, then it can be considered a tag. If you are filling paragraphs of text inside it, then it will be a regular page.
    Basically, a tag-page is also a page, but it holds links to all the pages marked with that tag.
    To add a tag to a particular note, you can type #<tag-name> anywhere in the note. For convenience and better organization, you can add it at the end of the note.
    Adding Simple Tags
    Linking to a page
    Creating a new page and adding a link to an existing page is the same process in Logseq. You have seen it above.
    If you type [[]] and enter a name, and that name already exists, a link to that page is created. Otherwise, a new page is created.
    In the short video below, you can see the process of linking a note in another note.
    Adding a link to a page in Logseq from another note.
    Referencing a block
    The main flexibility of Logseq lies in the linking of individual blocks. In each note, you have a parent node, then child nodes and grandchild nodes. These are distinguished by their indentation.
    So, in the case of block referencing, you should take utmost care to properly indent the note blocks.
    Now, type ((. A search box will appear above the cursor. Start typing something, and it will highlight the matching block anywhere in Logseq.
    0:00 /0:29 1× Referencing a block inside a note. The block we are adding is part of another note.
    Similarly, you can right-click on a node and select "Copy block ref" to copy the reference code for that block.
Copy Block ReferenceNow, if you paste this in another note, the main node content is pasted and the rest of that block (the indented contents) will be visible on hover.
    Hover over reference for preview💡Instead of the "Copy block ref", you can also choose "Copy block embed" and then paste the embed code. This will paste the whole block in the area where you pasted the embed code.Using the block referencing and Markdown links
Once you have the block reference code, you can use it as a URL to link to a particular word, instead of pasting it raw on a line. To do that, use the Markdown link syntax:
    [This is a link to the block](reference code of the block)For example:
    [This is a link to the block](((679b6c26-2ce9-48f2-be6a-491935b314a6)))So, when you hover over the text, the referenced content is previewed.
    Reference as Markdown HyperlinkNow that you have the basic building blocks, you can start organizing your notes into a proper knowledge base.
    In the next tutorial of this series, I'll discuss how you can use plugins and themes to customize Logseq.
  8. by: LHB Community
    Sat, 19 Apr 2025 15:59:35 +0530

As a Kubernetes engineer, I deal with kubectl almost every day. Pod status, service lists, locating CrashLoopBackOff errors, comparing YAML configurations, viewing logs... these are almost daily operations!
But to be honest, in the process of switching namespaces, manually copying pod names, and scrolling through logs again and again, I gradually felt burned out. That is, until I came across KubeTUI — a little tool that made me feel like I was "getting back on my feet".
    What is KubeTUI
KubeTUI, short for Kubernetes Terminal User Interface, is a visual Kubernetes dashboard that runs in the terminal. It's not like traditional kubectl, which makes you memorize and type out commands, or the Kubernetes Dashboard, which requires a browser, an Ingress, a token, and a bunch of configuration to log in.
    In a nutshell, it's a tool that lets you happily browse the state of your Kubernetes cluster from your terminal.
    Installing KubeTUI
KubeTUI is written in Rust, and you can download its binary releases from GitHub. Once you do that, you need to set up a Kubernetes environment to build and monitor your application.
    Let me show you how that is done, with an example of building a WordPress application.
    Setting up the Kubernetes environment
    We’ll use K3s to spin up a Kubernetes environment. The steps are mentioned below.
    Step 1: Install k3s and run
curl -sfL https://get.k3s.io | sh -With this single command, k3s will start itself after installation. Later, you can use the command below to start the k3s server.
sudo k3s server --write-kubeconfig-mode='644'
Here's a quick explanation of what the command includes:
k3s server: It starts the K3s server component, which is the core of the Kubernetes control plane.
--write-kubeconfig-mode='644': It ensures that the generated kubeconfig file has permissions that allow the owner to read and write it, and the group and others to only read it. If you start the server without this flag, you need to use sudo for all k3s commands.
Step 2: Check available nodes via kubectl
We need to verify that the Kubernetes control plane is actually working before we can make any deployments. You can use the command below to check that:
    k3s kubectl get nodeStep 3: Deploy WordPress using Helm chart (Sample Application)
K3s provides Helm integration, which helps manage Kubernetes applications. Simply apply this YAML manifest to spin up WordPress in the Kubernetes environment from the Bitnami Helm chart.
    Create a file named wordpress.yaml with the contents:
    Content MissingYou can then apply the configuration file to the application using the command:
    k3s kubectl apply -f wordpress.yamlIt will take around 2–3 minutes for the whole setup to complete.
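If you need a starting point, here is a minimal sketch of what such a manifest could look like (not the article's original; the Bitnami repository URL and the wpdev target namespace are assumptions based on the surrounding context). It uses the HelmChart resource that K3s' built-in helm-controller understands:
cat > wordpress.yaml <<'EOF'
# Namespace that will hold the WordPress release (referred to as wpdev later in this article)
apiVersion: v1
kind: Namespace
metadata:
  name: wpdev
---
# K3s' helm-controller installs any HelmChart resource it finds
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: wordpress
  namespace: kube-system
spec:
  repo: https://charts.bitnami.com/bitnami
  chart: wordpress
  targetNamespace: wpdev
EOF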
    Step 4: Launch KubeTUI
To launch KubeTUI, type the following command in the terminal.
kubetuiHere's what you will see. There are no pods in the default namespace. Let's switch to the wpdev namespace we created earlier by hitting "n".
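If KubeTUI starts up without finding your cluster, it is probably because it reads the same kubeconfig that kubectl does. With K3s, you may need to point the KUBECONFIG variable at the file K3s generates (the path below is the K3s default) and relaunch:
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubetui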
    How to Use KubeTui
    To navigate to different tabs, like switching screens from Pod to Config and Network, you can click with your mouse or press the corresponding number as shown:
    You can also switch tabs with the keyboard:
    If you need help with Kubetui at any time, press ? to see all the available options.
    It integrates a vim-like search mode. To activate search mode, enter /.
    Tip for Log filtering 
I discovered an interesting feature to filter logs from multiple Kubernetes resources. For example, say we want to target logs from all pods with names containing wordpress. KubeTUI will combine the logs from all matching pods. We can use the query:
    pod:wordpressYou can target different resource types like svc, jobs, deploy, statefulsets, replicasets with the log filtering in place. Instead of combining logs, if you want to remove some pods or container logs, you can achieve it with !pod:pod-to-exclude and !container:container-to-exclude filters.
    Conclusion
Working with Kubernetes involves switching between different namespaces, pods, networks, configs, and services. KubeTUI can be a valuable asset in managing and troubleshooting Kubernetes environments.
    I find myself more productive using tools like KubeTUI. Share your thoughts on what tools you’re utilizing these days to make your Kubernetes journey smoother.
    Bhuwan Mishra is a Fullstack developer, with Python and Go as his tools of choice. He takes pride in building and securing web applications, APIs, and CI/CD pipelines, as well as tuning servers for optimal performance. He also has a passion for working with Kubernetes.
  9. by: Geoff Graham
    Fri, 18 Apr 2025 12:12:35 +0000

    Hey, did you see the post Jen Simmons published about WebKit’s text-wrap: pretty implementation? It was added to Safari Technology Preview and can be tested now, as in, like, today. Slap this in a stylesheet and your paragraphs get a nice little makeover that improves the ragging, reduces hyphenation, eliminates typographic orphans at the end of the last line, and generally avoids large typographic rivers as a result. The first visual in the post tells the full story, showing how each of these is handled.
    Credit: WebKit Blog That’s a lot of heavy lifting for a single value! And according to Jen, this is vastly different from Chromium’s implementation of the exact same feature.
    Jen suggests that performance concerns are the reason for the difference. It does sound like the pretty value does a lot of work, and you might imagine that would have a cumulative effect when we’re talking about long-form content where we’re handling hundreds, if not thousands, of lines of text. If it sounds like Safari cares less about performance, that’s not the case, as their approach is capable of handling the load.
    Great, carry on! But now you know that two major browsers have competing implementations of the same feature. I’ve been unclear on the terminology of pretty since it was specced, and now it truly seems that what is considered “pretty” really is in the eye of the beholder. And if you’re hoping to choose a side, don’t, because the specification is intentionally unopinionated in this situation, as it says (emphasis added):
    So, there you have it. One new feature. Two different approaches. Enjoy your new typographic powers. 💪
    “Pretty” is in the eye of the beholder originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  10. by: Zell Liew
    Thu, 17 Apr 2025 12:38:05 +0000

    There was once upon a time when native CSS lacked many essential features, leaving developers to come up with all sorts of ways to make CSS easier to write over the years.
    These ways can mostly be categorized into two groups:
Pre-processors
Post-processors
Pre-processors include tools like Sass, Less, and Stylus. As the category's name suggests, these tools let you write CSS in their syntax before compiling your code into valid CSS.
    Post-processors work the other way — you write non-valid CSS syntax into a CSS file, then post-processors will change those values into valid CSS.
    There are two major post-processors today:
PostCSS
Lightning CSS
PostCSS is the largest kid on the block while Lightning CSS is a new and noteworthy one. We'll talk about them both in a bit.
    I think post-processors have won the compiling game
    Post-processors have always been on the verge of winning since PostCSS has always been a necessary tool in the toolchain.
    The most obvious (and most useful) PostCSS plugin for a long time is Autoprefixer — it creates vendor prefixes for you so you don’t have to deal with them.
/* Input */
.selector { transform: /* ... */; }

/* Output */
.selector { -webkit-transform: /* ... */; transform: /* ... */; }
Arguably, we don't need Autoprefixer much today because browsers are more interoperable, but nobody wants to go without Autoprefixer because it eliminates our worries about vendor prefixing.
    What has really tilted the balance towards post-processors includes:
    Native CSS gaining essential features Tailwind removing support for pre-processors Lightning CSS Let me expand on each of these.
    Native CSS gaining essential features
    CSS pre-processors existed in the first place because native CSS lacked features that were critical for most developers, including:
CSS variables
Nesting capabilities
Allowing users to break CSS into multiple files without additional fetch requests
Conditionals like if and for
Mixins and functions
Native CSS has progressed a lot over the years. It has gained great browser support for the first two features:
CSS Variables
Nesting
With just these two features, I suspect a majority of CSS users won't even need to fire up pre-processors or post-processors. What's more, the if() function is coming to CSS in the future too.
But, for the rest of us who need to make maintenance and loading performance a priority, we still need the third feature — the ability to break CSS into multiple files. This can be done with Sass's use feature or PostCSS's import feature (provided by the postcss-import plugin).
    PostCSS also contains plugins that can help you create conditionals, mixins, and functions should you need them.
    Although, from my experience, mixins can be better replaced with Tailwind’s @apply feature.
    This brings us to Tailwind.
    Tailwind removing support for pre-processors
    Tailwind 4 has officially removed support for pre-processors. From Tailwind’s documentation:
    If you included Tailwind 4 via its most direct installation method, you won’t be able to use pre-processors with Tailwind.
    @import `tailwindcss` That’s because this one import statement makes Tailwind incompatible with Sass, Less, and Stylus.
    But, (fortunately), Sass lets you import CSS files if the imported file contains the .css extension. So, if you wish to use Tailwind with Sass, you can. But it’s just going to be a little bit wordier.
    @layer theme, base, components, utilities; @import "tailwindcss/theme.css" layer(theme); @import "tailwindcss/preflight.css" layer(base); @import "tailwindcss/utilities.css" layer(utilities); Personally, I dislike Tailwind’s preflight styles so I exclude them from my files.
    @layer theme, base, components, utilities; @import 'tailwindcss/theme.css' layer(theme); @import 'tailwindcss/utilities.css' layer(utilities); Either way, many people won’t know you can continue to use pre-processors with Tailwind. Because of this, I suspect pre-processors will get less popular as Tailwind gains more momentum.
    Now, beneath Tailwind is a CSS post-processor called Lightning CSS, so this brings us to talking about that.
    Lightning CSS
Lightning CSS is a post-processor that can do many things a modern developer needs — so it replaces most of the PostCSS tool chain, including:
postcss-import
postcss-preset-env
autoprefixer
Besides having a decent set of built-in features, it wins over PostCSS because it's incredibly fast.
    Speed helps Lightning CSS win since many developers are speed junkies who don’t mind switching tools to achieve reduced compile times. But, Lightning CSS also wins because it has great distribution.
    It can be used directly as a Vite plugin (that many frameworks support). Ryan Trimble has a step-by-step article on setting it up with Vite if you need help.
    // vite.config.mjs export default { css: { transformer: 'lightningcss' }, build: { cssMinify: 'lightningcss' } }; If you need other PostCSS plugins, you can also include that as part of the PostCSS tool chain.
// postcss.config.js
// Import other plugins...
import lightning from 'postcss-lightningcss'

export default {
  plugins: [lightning, /* Other plugins */],
}
Many well-known developers have switched to Lightning CSS and didn't look back. Chris Coyier says he'll use a "super basic CSS processing setup" so you can be assured that you are probably not stepping on any toes if you wish to switch to Lightning, too.
    If you wanna ditch pre-processors today
    You’ll need to check the features you need. Native CSS is enough for you if you need:
CSS Variables
Nesting capabilities
Lightning CSS is enough for you if you need:
CSS Variables
Nesting capabilities
import statements to break CSS into multiple files
Tailwind (with @apply) is enough for you if you need:
all of the above
Mixins
If you still need conditionals like if, for, and other functions, it's still best to stick with Sass for now. (I've tried and encountered interoperability issues between postcss-for and Lightning CSS that I won't go into detail about here.)
    That’s all I want to share with you today. I hope it helps you if you have been thinking about your CSS toolchain.
    So, You Want to Give Up CSS Pre- and Post-Processors… originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  11. by: Preethi
    Wed, 16 Apr 2025 12:34:50 +0000

    This article covers tips and tricks on effectively utilizing the CSS backdrop-filter property to style contemporary user interfaces. You’ll learn how to layer backdrop filters among multiple elements, and integrate them with other CSS graphical effects to create elaborate designs.
    Below is a hodgepodge sample of what you can build based on everything we’ll cover in this article. More examples are coming up.
    CodePen Embed Fallback The blurry, frosted glass effect is popular with developers and designers these days — maybe because Josh Comeau wrote a deep-dive about it somewhat recently — so that is what I will base my examples on. However, you can apply everything you learn here to any relevant filter. I’ll also be touching upon a few of them in my examples.
    What’s essential in a backdrop filter?
    If you’re familiar with CSS filter functions like blur() and brightness(), then you’re also familiar with backdrop filter functions. They’re the same. You can find a complete list of supported filter functions here at CSS-Tricks as well as over at MDN.
    The difference between the CSS filter and backdrop-filter properties is the affected part of an element. Backdrop filter affects the backdrop of an element, and it requires a transparent or translucent background in the element for its effect to be visible. It’s important to remember these fundamentals when using a backdrop filter, for these reasons:
    to decide on the aesthetics, to be able to layer the filters among multiple elements, and to combine filters with other CSS effects. The backdrop
    Design is subjective, but a little guidance can be helpful. If you’ve applied a blur filter to a plain background and felt the result was unsatisfactory, it could be that it needed a few embellishments, like shadows, or more often than not, it’s because the backdrop is too plain.
    Plain backdrops can be enhanced with filters like brightness(), contrast(), and invert(). Such filters play with the luminosity and hue of an element’s backdrop, creating interesting designs. Textured backdrops complement distorting filters like blur() and opacity().
    <main> <div> <section> <h1>Weather today</h1> Cloudy with a chance of meatballs. Ramenstorms at 3PM that will last for ten minutes. </section> </div> </main> main { background: center/cover url("image.jpg"); box-shadow: 0 0 10px rgba(154 201 255 / 0.6); /* etc. */ div { backdrop-filter: blur(10px); color: white; /* etc. */ } } CodePen Embed Fallback Layering elements with backdrop filters
    As we just discussed, backdrop filters require an element with a transparent or translucent background so that everything behind it, with the filters applied, is visible.
    If you’re applying backdrop filters on multiple elements that are layered above one another, set a translucent (not transparent) background to all elements except the bottommost one, which can be transparent or translucent, provided it has a backdrop. Otherwise, you won’t see the desired filter buildup.
    <main> <div> <section> <h1>Weather today</h1> Cloudy with a chance of meatballs. Ramenstorms at 3PM that will last for ten minutes. </section> <p>view details</p> </div> </main> main { background: center/cover url("image.jpg"); box-shadow: 0 0 10px rgba(154 201 255 / 0.6); /* etc. */ div { background: rgb(255 255 255 / .1); backdrop-filter: blur(10px); /* etc. */ p { backdrop-filter: brightness(0) contrast(10); /* etc. */ } } } CodePen Embed Fallback Combining backdrop filters with other CSS effects
When an element meets a certain criterion, it gets a backdrop root (not yet a standardized name). One criterion is when an element has a filter effect (from filter and backdrop-filter). I believe backdrop filters can work well with other CSS effects that also use a backdrop root because they all affect the same backdrop.
    Of those effects, I find two interesting: mask and mix-blend-mode. Combining backdrop-filter with mask resulted in the most reliable outcome across the major browsers in my testing. When it’s done with mix-blend-mode, the blur backdrop filter gets lost, so I won’t use it in my examples. However, I do recommend exploring mix-blend-mode with backdrop-filter.
    Backdrop filter with mask
    Unlike backdrop-filter, CSS mask affects the background and foreground (made of descendants) of an element. We can use that to our advantage and work around it when it’s not desired.
    <main> <div> <div class="bg"></div> <section> <h1>Weather today</h1> Cloudy with a chance of meatballs. Ramenstorms at 3PM that will last for ten minutes. </section> </div> </main> main { background: center/cover url("image.jpg"); box-shadow: 0 0 10px rgba(154 201 255 / 0.6); /* etc. */ > div { .bg { backdrop-filter: blur(10px); mask-image: repeating-linear-gradient(90deg, transparent, transparent 2px, white 2px, white 10px); /* etc. */ } /* etc. */ } } CodePen Embed Fallback Backdrop filter for the foreground
    We have the filter property to apply graphical effects to an element, including its foreground, so we don’t need backdrop filters for such instances. However, if you want to apply a filter to a foreground element and introduce elements within it that shouldn’t be affected by the filter, use a backdrop filter instead.
    <main> <div class="photo"> <div class="filter"></div> </div> <!-- etc. --> </main> .photo { background: center/cover url("photo.jpg"); .filter { backdrop-filter: blur(10px) brightness(110%); mask-image: radial-gradient(white 5px, transparent 6px); mask-size: 10px 10px; transition: backdrop-filter .3s linear; /* etc.*/ } &:hover .filter { backdrop-filter: none; mask-image: none; } } In the example below, hover over the blurred photo.
    CodePen Embed Fallback There are plenty of ways to play with the effects of the CSS backdrop-filter. If you want to layer the filters across stacked elements then ensure the elements on top are translucent. You can also combine the filters with other CSS standards that affect an element’s backdrop. Once again, here’s the set of UI designs I showed at the beginning of the article, that might give you some ideas on crafting your own.
    CodePen Embed Fallback References
    backdrop-filter (CSS-Tricks) backdrop-filter (MDN) Backdrop root (CSS Filter Effects Module Level 2) Filter functions (MDN) Using CSS backdrop-filter for UI Effects originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  12. by: Abhishek Kumar
    Tue, 15 Apr 2025 05:41:55 GMT

    Once upon a time, coding meant sitting down, writing structured logic, and debugging for hours.
    Fast-forward to today, and we have Vibe Coding, a trend where people let AI generate entire chunks of code based on simple prompts. No syntax, no debugging, no real understanding of what’s happening under the hood. Just vibes.
    Coined by OpenAI co-founder Andrej Karpathy, Vibe Coding is the act of developing software by giving natural language instructions to AI and accepting whatever it spits out.
    Source : XSome people even take it a step further by using voice-to-text tools so they don’t have to type at all. Just describe your dream app, and boom, the AI makes it for you. Or does it?
    People are building full-fledged SaaS products in days, launching MVPs overnight, and somehow making more money than seasoned engineers who swear by Agile methodologies.
    And here I am, writing about them instead of cashing in myself. Life isn’t fair, huh?
    But don’t get me wrong, I’m not here to hate. I’m here to expand on this interesting movement and hand you the ultimate arsenal to embrace vibe coding with these tools.
    ✋Non-FOSS Warning! Some of the applications mentioned here may not be open source. They have been included in the context of Linux usage. Also, some tools provide interface for popular, commercial LLMs like ChatGPT and Claude.1. Aider - AI pair programming in your terminal
Aider is the perfect choice if you're looking for a pair programmer to help you ship code faster. It allows you to pair program with LLMs to edit code in your local Git repository. You can start a new project or work with an existing GitHub repo—all from your terminal.
    Key Features
    ✅ Aider works best with Claude 3.7 Sonnet, DeepSeek R1 & Chat V3, OpenAI o1, o3-mini & GPT-4o, but can connect to almost any LLM, including local models.
    ✅ Aider makes a map of your entire codebase, which helps it work well in larger projects.
    ✅ Supports most popular programming languages: Python, JavaScript, Rust, Ruby, Go, C++, PHP, HTML, CSS, and more.
    ✅ Automatically commits changes with sensible commit messages. Use familiar Git tools to easily diff, manage, and undo AI changes.
    ✅ Use Aider from within your favorite IDE or editor. Ask for changes by adding comments to your code, and Aider will get to work.
    ✅ Add images and web pages to the chat to provide visual context, screenshots, and reference docs.
    ✅ Automatically lint and test your code every time Aider makes changes. It can fix problems detected by linters and test suites.
    ✅ Works best with LLM APIs but also supports web chat interfaces, making copy-pasting code seamless.
    Aider2. VannaAI - Chat with SQL Database
    Writing SQL queries can be tedious, but VannaAI changes that by letting you interact with SQL databases using natural language.
    Instead of manually crafting queries, you describe what you need, and VannaAI generates the SQL for you.
It works in two steps: train a RAG "model" on your data, then ask questions that return SQL queries.
    Key Features
    ✅ Out-of-the-box support for Snowflake, BigQuery, Postgres, and more.
    ✅ The Vanna Python package and frontend integrations are all open-source, allowing deployment on your infrastructure.
    ✅ Database contents are never sent to the LLM unless explicitly enabled.
    ✅ Improves continuously by augmenting training data.
    ✅ Use Vanna in Jupyter Notebooks, Slackbots, web apps, Streamlit apps, or even integrate it into your own web app.
    VannaAI makes querying databases as easy as having a conversation, making it a game-changer for both technical and non-technical users.


    Vanna AI3. All Hands - Open source agents for developers
    All Hands is an open-source platform for AI developer agents, capable of building projects, adding features, debugging, and more.
    Competing with Devin, All Hands recently topped the SWE-bench leaderboard with 53% accuracy.
    Key Features
    ✅ Use All Hands via an interactive GUI, command-line interface (CLI), or non-interactive modes like headless execution and GitHub Actions.
    ✅ Open-source freedom, built under the MIT license to ensure AI technology remains accessible to all.
    ✅ Handles complex tasks, from code generation to debugging and issue fixing.
    ✅ Developed in collaboration with AI safety experts like Invariant Labs to balance innovation and security.
    To get started, install Docker 26.0.0+ and run OpenHands using the provided Docker commands. Once running, configure your LLM provider and start coding with AI-powered assistance.
    All Hands4. Continue - Leading AI-powered code assistant
You must have heard about Cursor IDE, the popular AI-powered IDE; Continue is similar to it but open source under the Apache license.
It is highly customizable and lets you add any language model for auto-completion or chat. This can immensely improve your productivity. You can add Continue to VS Code and JetBrains IDEs.
    Key Features
    ✅ Continue autocompletes single lines or entire sections of code in any programming language as you type.
    ✅ Attach code or other context to ask questions about functions, files, the entire codebase, and more.
    ✅ Select code sections and press a keyboard shortcut to rewrite code from natural language.
    ✅ Works with Ollama, OpenAI, Together, Anthropic, Mistral, Azure OpenAI Service, and LM Studio.
✅ Context providers for Codebase, GitLab Issues, Documentation, Methods, Confluence pages, and Files.
✅ Customizable blocks: Data blocks, Docs blocks, Rules blocks, MCP blocks, and Prompts blocks.
    Continue5. Wave - Terminal with local LLMs
    Wave terminal introduces BYOLLM (Bring Your Own Large Language Model), allowing users to integrate their own local or cloud-based LLMs into their workflow.
    It currently supports local LLM providers such as Ollama, LM Studio, llama.cpp, and LocalAI while also enabling the use of any OpenAI API-compatible model.
    Key Features
    ✅ Use local or cloud-based LLMs, including OpenAI-compatible APIs.
    ✅ Seamlessly integrate LLM-powered responses into your terminal workflow.
    ✅ Set the AI Base URL and AI Model in the settings or via CLI.
    ✅ Plans to include support for commercial models like Gemini and Claude.
    Waveterm6. Warp terminal - Agent mode (not open source)
    After WaveTerm, we have another amazing contender in the AI-powered terminal space, Warp Terminal. I personally use this so I may sound biased. 😛
    It’s essentially an AI-powered assistant that can understand natural language, execute commands, and troubleshoot issues interactively.
    Instead of manually looking up commands or switching between documentation, you can simply describe the task in English and let Agent Mode guide you through it.
    Key Features
    ✅ No need to remember complex CLI commands, just type what you want, like "Set up an Nginx reverse proxy with SSL", and Agent Mode will handle the details.
    ✅ Ran into a “port 3000 already in use” error? Just type "fix it", and Warp will suggest running kill $(lsof -t -i:3000). If that doesn’t work, it’ll refine the approach automatically.
    ✅ Works seamlessly with Git, AWS, Kubernetes, Docker, and any other tool with a CLI. If it doesn’t know a command, you can tell it to read the help docs, and it will instantly learn how to use the tool.
    ✅ Warp doesn’t send anything to the cloud without your permission. You approve each command before it runs, and it only reads outputs when explicitly allowed.
    It seems like Warp is moving from a traditional AI-assisted terminal to an interactive AI-powered shell, making the command line much more intuitive.
    Would you consider switching to it, or do you think this level of automation might be risky for some tasks?
    Warp Terminal7. Pieces : AI extension to IDE (not open source)
Pieces isn't a code editor itself, it's an AI-powered extension that supercharges editors like VS Code, Sublime Text, Neovim, and many more IDEs with real-time intelligence and memory.
Its highlighted feature is the Long-Term Memory Agent, which captures up to 9 months of coding context, helping you seamlessly resume work, even after a long break.
    Everything runs locally for full privacy. It understands your code, recalls snippets, and blends effortlessly into your dev tools to eliminate context switching.
    Bonus: it’s free for now, with a free tier promised forever, but they will start charging soon, so early access might come with perks.
    Key Features
    ✅ Stores 9 months of local coding context
    ✅ Integrates with Neovim, VS Code, and Sublime Text
    ✅ Fully on-device AI with zero data sharing
    ✅ Context-aware suggestions via Pieces Copilot
    ✅ Organize and share snippets using Pieces Drive
    ✅ Always-free tier promised, with early adopter perks
    Pieces8. Aidermacs: AI aided coding in Emacs
    Aidermacs by MatthewZMD is for the Emacs power users who want that sweet Cursor-style AI experience; but without leaving their beloved terminal.
    It’s a front-end for the open-source Aider, bringing powerful pair programming into Emacs with full respect for its workflows and philosophy.
    Whether you're using GPT-4, Claude, or even DeepSeek, Aidermacs auto-detects your available models and lets you chat with them directly inside Emacs.
    And yes, it's deeply customizable, as all good Emacs things should be.
    Key Features
    ✅ Integrates Aider into Emacs for collaborative coding
    ✅ Intelligent model selection from OpenAI, Anthropic, Gemini, and more
    ✅ Built-in Ediff for side-by-side AI-generated changes
    ✅ Fine-grained file control: edit, read-only, scratchpad, and external
    ✅ Fully theme-aware with Emacs-native UI integration
    ✅ Works well in terminal via vterm with theme-based colors
    Aidermacs9. Jeddict AI Assistant
This one is for the Java folks. It's a plugin for Apache NetBeans. I remember using NetBeans back in school, and if this AI stuff was around then, I swear I would've aced my CS practicals.
    This isn’t your average autocomplete tool. Jeddict AI Assistant brings full-on AI integration into your IDE: smarter code suggestions, context-aware documentation, SQL query help, even commit messages.
    It's especially helpful if you're dealing with big Java projects and want AI that actually understands what’s going on in your code.
    Key Features
    ✅ Smart, inline code completions using OpenAI, DeepSeek, Mistral, and more
    ✅ AI chat with full awareness of project/class/package context
    ✅ Javadoc creation & improvement with a single shortcut
    ✅ Variable renaming, method refactoring, and grammar fixes via AI hints
    ✅ SQL query assistance & inline completions in the database panel
    ✅ Auto-generated Git commit messages based on your diffs
    ✅ Custom rules, file context preview, and experimental in-editor updates
    ✅ Fully customizable AI provider settings (supports LM Studio, Ollama, GPT4All too!)
    Jeddict AI Assistant10. Amazon CodeWhisperer
    If your coding journey revolves around AWS services, then Amazon CodeWhisperer might be your ideal AI-powered assistant.
    While it works like other AI coding tools, its real strength lies in its deep integration with AWS SDKs, Lambda, S3, and DynamoDB.
    CodeWhisperer is fine-tuned for cloud-native development, making it a go-to choice for developers building serverless applications, microservices, and infrastructure-as-code projects.
    Since it supports Visual Studio Code and JetBrains IDEs, AWS developers can seamlessly integrate it into their workflow and get AWS-specific coding recommendations that follow best practices for scalability and security.
    Plus, individual developers get free access, making it an attractive option for solo builders and startup developers.
    Key Features
    ✅ Optimized code suggestions for AWS SDKs and cloud services.
    ✅ Built-in security scanning to detect vulnerabilities.
    ✅ Supports Python, Java, JavaScript, and more.
    ✅ Free for individual developers.
    Amazon CodeWhisperer11. Qodo AI (previously Codium)
    If you’ve ever been frustrated by the limitations of free AI coding tools, qodo might be the answer.
    Supporting over 50 programming languages, including Python, Java, C++, and TypeScript, qodo integrates smoothly with Visual Studio Code, IntelliJ, and JetBrains IDEs.
    It provides intelligent autocomplete, function suggestions, and even code documentation generation, making it a versatile tool for projects of all sizes.
    While it may not have some of the advanced features of paid alternatives, its zero-cost access makes it a game-changer for budget-conscious developers.
    Key Features
    ✅ Unlimited free code completions with no restrictions.
    ✅ Supports 50+ programming languages, including Python, Java, and TypeScript.
    ✅ Works with popular IDEs like Visual Studio Code and JetBrains.
    ✅ Lightweight and responsive, ensuring a smooth coding experience.
    QodoFinal thoughts
    📋I deliberately skipped IDEs from this list. I have a separate list of editors for vibe coding on Linux.With time, we’re undoubtedly going to see more AI-assisted coding take center stage. As Anthropic CEO Dario Amodei puts it, AI will write 90% of code within six months and could automate software development entirely within a year.
    Whether that’s an exciting leap forward or a terrifying thought depends on how much you trust your AI pair programmer.
    If you’re diving into these tools, I highly recommend brushing up on the basics of coding and version control.
    AI can write commands for you, but if you don’t know what it’s doing, you might go from “I just built the next billion-dollar SaaS!” to “Why did my AI agent just delete my entire codebase?” in a matter of seconds.
That said, this curated list of amazing open-source tools should get you started. Whether you're a seasoned developer or just someone who loves typing cool things into a terminal, these tools will level up your game.
    Just remember: the AI can vibe with you, but at the end of the day, you're still the DJ of your own coding playlist (sorry for the cringy line 👉👈).
  13. by: Chris Coyier
    Mon, 14 Apr 2025 16:36:55 +0000

I joked while talking with Adam Argyle on ShopTalk the other day that there is more CSS in one of the demos we were looking at than I have in my whole CSS brain. We were looking at his Carousel Gallery which is one of the more impressive sets of CSS demos I've ever seen. Don't let your mind get too stuck on that word "carousel". I think it's smart to use that word here, but the CSS technologies being developed here have an incredible number of uses. Things that relate to scrolling interactivity, inertness, column layout, and more. Some of it is brand spanking new. In fact just a few weeks ago, I linked up the Carousel Configurator and said:
    Which was kind of true at the time, but the features aren’t that experimental anymore. All the features went live in Chrome 135 which is in stable release now for the world. Of course, you’ll need to think in terms of progressive enhancement if you’re looking to roll this stuff out to production, but this is real world movement on some huge stuff for CSS. This stuff is in the category where, looking a few years out, it’s a real mistake if carousels and carousel-like behavior isn’t built this way. This is the way of best performance, best semantics, and best accessibility, which ain’t gonna get beat with your MooTools Super Slider ok. Brecht is already bloggin’ about it. That’s a finger on the pulse right there.
    What else is pretty hot ‘n’ fresh in CSS land?
CSS multicol block direction wrapping by Rachel Andrew — The first implementation of columns being able to wrap down instead of across. Useful.
Can you un-mix a mixin? by Miriam Suzanne — Mixins are likely to express themselves as @apply in CSS eventually (despite being abandoned on purpose once?). We can already sort of do it with custom properties and style queries, which actually have the desirable characteristic of cascading. What will @apply do to address that?
Feature detect CSS @starting-style support by Bramus Van Damme — Someday, @supports at-rule(@starting-style) {} will work, but there (🫥) is no browser support for that yet. There is a way to do it with the space toggle trick fortunately (which is one of the most mind bending things ever in CSS if you ask me). I feel like mentioning that I was confused how to test a CSS function recently, but actually since they return values, it's not that weird. I needed to do @supports (color: light-dark(white, black)) {} which worked fine. Related to @starting-style, this is a pretty good article.
New Values and Functions in CSS by Alvaro Montoro — Speaking of new functions, there are a good number of them, like calc-size(), first-valid(), sibling-index(), random-item(), and more. Amazing.
A keyframe combo trick by Adam Argyle — Two animations on a single element, one for the page load and one for a scroll animation. They fight. Or do they?
Container Queries Unleashed by Josh Comeau — If you haven't boned up on the now-available-everywhere @container stuff, it rules, and Josh does a great job of explaining why.
A Future of Themes with CSS Inline if() Conditions by Christopher Kirk-Nielsen — Looks like if() in CSS behaves like a switch in other languages and what you're doing is checking if the value of a custom property is equal to a certain value, then returning whatever value you want. Powerful! Chris is building something like light-dark() here except with more than two themes and where the themes affect more than just color.
  14. by: Declan Chidlow
    Mon, 14 Apr 2025 12:40:46 +0000

    The cursor is a staple of the desktop interface but is scarcely touched by websites. This is for good reason. People expect their cursors to stay fairly consistent, and meddling with them can unnecessarily confuse users. Custom cursors also aren’t visible for people using touch interfaces — which excludes the majority of people.
    Geoff has already covered styling cursors with CSS pretty comprehensively in “Changing the Cursor with CSS for Better User Experience (or Fun)” so this post is going to focus on complex and interesting styling.
    Custom cursors with JavaScript
    Custom cursors with CSS are great, but we can take things to the next level with JavaScript. Using JavaScript, we can use an element as our cursor, which lets us style it however we would anything else. This lets us transition between cursor states, place dynamic text within the cursor, apply complex animations, and apply filters.
    In its most basic form, we just need a div that continuously positions itself to the cursor location. We can do this with the mousemove event listener. While we’re at it, we may as well add a cool little effect when clicking via the mousedown event listener.
    CodePen Embed Fallback That’s wonderful. Now we’ve got a bit of a custom cursor going that scales on click. You can see that it is positioned based on the mouse coordinates relative to the page with JavaScript. We do still have our default cursor showing though, and it is important for our new cursor to indicate intent, such as changing when hovering over something clickable.
    We can disable the default cursor display completely by adding the CSS rule cursor: none to *. Be aware that some browsers will show the cursor regardless if the document height isn’t 100% filled.
    We’ll also need to add pointer-events: none to our cursor element to prevent it from blocking our interactions, and we’ll show a custom effect when hovering certain elements by adding the pressable class.
    CodePen Embed Fallback Very nice. That’s a lovely little circular cursor we’ve got here.
    Fallbacks, accessibility, and touchscreens
    People don’t need a cursor when interacting with touchscreens, so we can disable ours. And if we’re doing really funky things, we might also wish to disable our cursor for users who have the prefers-reduced-motion preference set.
    We can do this without too much hassle:
    CodePen Embed Fallback What we’re doing here is checking if the user is accessing the site with a touchscreen or if they prefer reduced motion and then only enabling the custom cursor if they aren’t. Because this is handled with JavaScript, it also means that the custom cursor will only show if the JavaScript is active, otherwise falling back to the default cursor functionality as defined by the browser.
const isTouchDevice = "ontouchstart" in window || navigator.maxTouchPoints > 0;
const prefersReducedMotion = window.matchMedia("(prefers-reduced-motion: reduce)").matches;

if (!isTouchDevice && !prefersReducedMotion && cursor) {
  // Cursor implementation is here
}
Currently, the website falls back to the default cursors if JavaScript isn't enabled, but we could set a fallback cursor more similar to our styled one with a bit of CSS. Progressive enhancement is where it's at!
    Here we’re just using a very basic 32px by 32px base64-encoded circle. The 16 values position the cursor hotspot to the center.
html { cursor: url("data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIzMiIgaGVpZ2h0PSIzMiIgdmlld0JveD0iMCAwIDMyIDMyIj4KICA8Y2lyY2xlIGN4PSIxNiIgY3k9IjE2IiByPSIxNiIgZmlsbD0iYmxhY2siIC8+Cjwvc3ZnPg==") 16 16, auto; } Taking this further
    Obviously this is just the start. You can go ballistic and completely overhaul the cursor experience. You can make it invert what is behind it with a filter, you can animate it, you could offset it from its actual location, or anything else your heart desires.
    As a little bit of inspiration, some really cool uses of custom cursors include:
    Studio Mesmer switches out the default cursor for a custom eye graphic when hovering cards, which is tasteful and fits their brand. Iara Grinspun’s portfolio has a cursor implemented with JavaScript that is circular and slightly delayed from the actual position which makes it feel floaty. Marlène Bruhat’s portfolio has a sleek cursor that is paired with a gradient that appears behind page elements. Aleksandr Yaremenko’s portfolio features a cursor that isn’t super complex but certainly stands out as a statement piece. Terra features a giant glowing orb containing text describing what you’re hovering over. Please do take care when replacing browser or native operating system features in this manner. The web is accessible by default, and we should take care to not undermine this. Use your power as a developer with taste and restraint.
    Next Level CSS Styling for Cursors originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  15. by: Abhishek Prakash
    Mon, 14 Apr 2025 10:58:44 +0530

    Lately, whenever I tried accessing a server via SSH, it asked for a passphrase:
    Enter passphrase for key '/home/abhishek/.ssh/id_rsa':Interestingly, it was asking for my local system's account password, not the remote server's.
    Entering the account password for SSH key is a pain. So, I fixed it with this command which basically resets the password:
ssh-keygen -pIt then asked for the file which has the key. This is the private SSH key, usually located in the .ssh/id_rsa file. I provided the absolute path for that.
    Now it asked for the 'old passphrase' which is the local user account password. I provided it one more time and then just pressed enter for the new passphrase.
❯ ssh-keygen -p Enter file in which the key is (/home/abhishek/.ssh/id_ed25519): /home/abhishek/.ssh/id_rsa Enter old passphrase: Enter new passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved with the new passphrase.And thus, it didn't ask me to enter the passphrase for the SSH private key anymore. I did not even need a reboot or anything.
    Wondering why it happened and how it was fixed? Let's go in detail.
    What caused 'Enter passphrase for key' issue?
Here is my efficient SSH workflow. I have the same set of SSH keys on my personal systems, so I don't have to create new ones and add them to the servers when I install a new distro.
    Since the public SSH key is added to the servers, I don't have to enter the root password for the servers every time I use SSH.
    And then I have an SSH config file in place that maps the server's IP address with an easily identifiable name. It further smoothens my workflow.
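For illustration, an entry in that config file can be as simple as this; the alias, IP address, and username below are made-up examples:
# ~/.ssh/config
Host webserver
    HostName 203.0.113.10
    User root
    IdentityFile ~/.ssh/id_rsa
With that in place, ssh webserver is all it takes instead of typing the full user@IP-address form.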
    Recently, I switched my personal system to CachyOS. I copied my usual SSH keys from an earlier backup and gave them the right permission.
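If you are restoring keys from a backup in a similar way, the permissions SSH expects usually look like this (assuming the default id_rsa key pair names):
# Private key readable only by you; public key can be world-readable
chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_rsa
chmod 644 ~/.ssh/id_rsa.pub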
    But when I tried accessing any server, it asked for a passphrase:
Enter passphrase for key '/home/abhishek/.ssh/id_rsa':No, it was not the remote server's user password. It asked for my regular, local system's password as if I were using sudo.
    I am guessing that some settings somewhere were left untouched and it started requiring a password to unlock the private SSH key.
    This is an extra layer of security, and I don't like the inconvenience that comes with it.
One method to use SSH without entering the password to unlock the key each time is to reset the password on the SSH key.
    And that's what you saw at the beginning of this article.
    Fixing it by resetting the password on SSH key
    Note down the location of your SSH private key. Usually, it is ~/.ssh/id_rsa unless you have multiple SSH key sets for different servers.
    Enter the following command to reset the password on an SSH key:
ssh-keygen -pIt will ask you for the path to the key. Provide the absolute path to your private SSH key.
Enter file in which the key is (/home/abhishek/.ssh/id_ed25519):It then asks you to enter the old passphrase, which should be your local account's password. The same one that you use for sudo.
Enter old passphrase:Once you have entered that, it will ask you to enter a new passphrase. Keep it empty by pressing the enter key. This way, it won't have any password.
Enter new passphrase (empty for no passphrase):Press the enter key again when it asks:
    Enter same passphrase again:And that's about it.
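If you prefer to do the same thing non-interactively, ssh-keygen accepts the key file, the old passphrase, and the new (empty) passphrase as flags. A quick sketch, assuming the key sits at ~/.ssh/id_rsa; be aware the old passphrase ends up in your shell history this way:
# -p change passphrase, -f key file, -P old passphrase, -N new (empty) passphrase
ssh-keygen -p -f ~/.ssh/id_rsa -P 'old-passphrase-here' -N ''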
    You can instantly verify it. You don't need to reboot the system or even log out from the terminal.
    Enjoy SSH 😄
  16. Birth of Unix

    by: John Paul Wohlscheid
    Sun, 13 Apr 2025 14:34:36 GMT

Sometimes it feels like Unix has been around forever, at least to users who have used Linux or BSD in any form for a decade or more now.
Its ideals laid the groundwork for Linux, and it underpins macOS. A modern descendant (FreeBSD) is used on thousands of servers while Linux rules the server space along with the supercomputer industry.
Even though its original form is now history, it remains a significant development that helped start Linux and more.
    But initially, it had a rocky start and had to be developed in secret.
    Punch Cards and Multics
    Back in the days when computers took up whole rooms, the main method of using computers was the punch card interface. Computers didn't come with an operating system, they had a programming language built into them. If you wanted to run a program, you had to use a device to enter your program and the data on a series of punch cards.
    According to an interview with Brian Kernighan, one of the Unix creators, "So if you had a 1,000-line program, you would have 1,000 cards. There were no screens, no interactive output. You gave your cards to the computer operator and waited for your printout that was the result of your program."
    At the time, all text output from these computers was capitalized. Kernighan wrote an application to handle the formatting of his thesis. "And so thesis was basically three boxes of cards, 6,000 cards in each box, probably weighed 10, 12 pounds, five kilograms. And so you’d take these three boxes, 1,000 cards of which the first half of the first box was the program and then the remaining 5,000 cards was the thesis. And you would take those three boxes and you’d hand them to the operator. And an hour or two or three later back would come a printed version of thesis again."

    Needless to say, this makes modern thesis writing seem effortless, right?
    In the late 1950s, AT&T, Massachusetts Institute of Technology, and General Electric created a project to revolutionize computing and push it beyond the punch card.
    The project was named Multics or “Multiplexed Information and Computing Service”. According to the paper that laid out the plans for the project, there were nine major goals:
1. Convenient remote terminal use.
2. Continuous operation analogous to power & telephone services.
3. A wide range of system configurations, changeable without system or user program reorganization.
4. A high reliability internal file system.
5. Support for selective information sharing.
6. Hierarchical structures of information for system administration and decentralization of user activities.
7. Support for a wide range of applications.
8. Support for multiple programming environments & human interfaces.
9. The ability to evolve the system with changes in technology and in user aspirations.
Multics would be a time-sharing computer, instead of relying on punch cards. This means that users could log into the system via a terminal and use it for an allotted period of time. This would turn the computer from a system administered by a high priest class (Steven Levy mentioned this concept in his book Hackers.) to something that could be accessed by anyone with the necessary knowledge.
    The project was very ambitious. Unfortunately, turning ideas into reality takes time. Bell Labs withdrew from the project in 1969. They had joined the project to get a time-sharing operating system for their employees, but there had been little progress.
    The lessons learned from Multics eventually helped in the creation of Unix, more on that below.
    To Space Beyond
Image Credits: Multicians / A team installing GE 645 mainframe in ParisThe Bell engineers who had worked on Multics (including Ken Thompson and Dennis Ritchie) were left without an operating system, but with tons of ideas. In the last days of their involvement in Multics, they had started writing an operating system on a GE-645 mainframe. But then the project ended, and they no longer needed the mainframe.
    They lobbied their bosses to buy a mini-computer to start their own operating system project but were denied. They continued to work on the project in secret. Often they would get together and discuss what they would want in an operating system and sketch out ideas for the architecture.
    During this time, Thompson started working on a little side project. He wrote a game for the GE-645 named Space Travel. The game "simulated all the major bodies in the solar system along with a spaceship that could fly around them".

    Unfortunately, it was expensive to run on the mainframe. Each game cost $75 to play. So, Thompson went looking for a different, cheaper computer to use. He discovered a PDP-7 mini-computer left over from a previous project. He rewrote the game to run on the PDP-7.
    PDP-7, Image Credits: WikipediaIn the summer of 1969, Thompson's wife took their newborn son to visit her parents. Thompson took advantage of this time and newly learned programming skills to start writing an operating system for the PDP-7. Since he saw this new project as a cut-down version of Multics, he named it “Un-multiplexed Information and Computing Service," or Unics. It was eventually changed to Unix.
Other Bell Labs employees joined the project. The team quickly ran into limitations with the hardware itself. The PDP-7 was aging and underpowered, so they had to figure out how to get their hands on a newer computer. They knew that their bosses would never buy a new system because "lab's management wasn't about to allow any more research on operating systems."
    At the time, Bell Labs produced lots of patents. According to Kernighan, "typically one or two a day at that point." It was time-consuming to create applications for those patents because the formatting required by the government was very specific.

    At the time, there were no commercial word processing programs capable of handling the formatting. The Unix group offered to write a program for the patent department that would run on a shiny new PDP-11. They also promised to have it done before any commercial software would be available to do the same. Of course, they failed to mention that they would need to write an operating system for the software to run on.
    Their bosses agreed to the proposal and placed an order for a PDP-11 in May 1970. The computer arrived quickly, but it took six months for the drives to arrive.
PDP-11/70, Image Credits: Wikipedia In the meantime, the team continued to write Unix on the PDP-7, making it the platform where the first version of Unix was developed. Once the PDP-11 was up and running, the team ported what they had to the new system. In short order, the new patent application software was unveiled to the patent department. It was a hit. The management was so pleased with the results, they bought the Unix team their own PDP-11.
    Growing and Legal Problems
    Image Credits: AmazonWith a more powerful computer at their command, work on Unix continued. In 1971, the team released its first official manual: The UNIX Programmer's Manual. The operating system was officially debuted to the world via a paper presented at the 1973 symposium of the Association for Computing Machinery. This was followed by a flood of requests for copies.
    This brought up new issues. AT&T, the company that financed Bell Labs, couldn't sell an operating system. In 1956, AT&T was forced by the US government to agree to a consent decree.

    This consent decree prohibited AT&T from "selling products not directly related to telephones and telecommunications, in return for its legal monopoly status in running the country's long-distance phone service." The solution was to release "the Unix source code under license to anyone who asked, charging only a nominal fee".
    The consent decree also prohibited AT&T from providing tech support. So, the code was essentially available as-is. This led to the creation of the first user groups as Unix adopters banded together to provide mutual assistance.
    C Programming, The Necessary Catalyst
    The creation of the C programming language by Dennis Ritchie at Bell Labs helped Unix make progress with its future versions, and indirectly influenced the ability to create BSD and Linux.
And now we have many programming languages and operating systems, including several variants of Linux, BSD, and other Unix-like operating systems as well.
  17. By: Janus Atienza
    Sat, 12 Apr 2025 18:30:58 +0000

    Have you ever searched your name or your brand and found content that you didn’t expect to see? 
    Maybe a page that doesn’t represent you well or something you want to keep track of for your records? 
    If you’re using Linux or Unix, you’re in a great position to take control of that situation. With just a few simple tools, you can save, organize, and monitor any kind of web content with ease. 
    This guide walks you through how to do that, step by step, using tools built right into your system.
    This isn’t just about removing content. It’s also about staying informed, being proactive, and using the strengths of Linux and Unix to help you manage your digital presence in a reliable way.
    Let’s take a look at how you can start documenting web content using your system.
    Why Organizing Online Content Is a Smart Move
    When something important appears online—like an article that mentions you, a review of your product, or even a discussion thread—it helps to keep a copy for reference. Many platforms and services ask for details if you want them to update or review content. Having all the right information at your fingertips can make things smoother.
    Good records also help with transparency. You’ll know exactly what was published and when, and you’ll have everything you need if you ever want to take action on it.
    Linux and Unix systems are perfect for this kind of work because they give you flexible tools to collect and manage web content without needing extra software. Everything you need is already available or easily installable.
    Start by Saving the Page with wget
    The first step is to make sure you have a full copy of the page you’re interested in. This isn’t just about saving a screenshot—it’s about capturing the full experience of the page, including images, links, and layout.
    You can do this with a built-in tool called wget. It’s easy to use and very reliable.
    Here’s a basic command:
    wget --mirror --convert-links --adjust-extension --page-requisites --no-parent https://example.com/the-page
    This command downloads the full version of the page and saves it to your computer. You can organize your saved pages by date, using a folder name like saved_pages_2025-04-10 so everything stays neat and searchable.
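    For instance, a small variation of the command above drops everything into a dated folder. This is only a sketch; the folder naming is a convention you can adapt:
    dir="saved_pages_$(date +%F)"
    mkdir -p "$dir"
    wget --mirror --convert-links --adjust-extension --page-requisites --no-parent --directory-prefix="$dir" https://example.com/the-page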
    If you don’t have wget already, most systems let you install it quickly with a package manager like apt or yum.
    Keep a Log of Your Terminal Session
    If you’re working in the terminal, it’s helpful to keep a record of everything you do while gathering your content. This shows a clear trail of how you accessed the information.
    The script command helps with this. It starts logging everything that happens in your terminal into a text file.
    Just type:
    script session_log_$(date +%F_%H-%M-%S).txt
    Then go ahead and run your commands, visit links, or collect files. When you’re done, just type exit to stop the log. This gives you a timestamped file that shows everything you did during that session, which can be useful if you want to look back later.
    Capture Screenshots with a Timestamp
    Screenshots are one of the easiest ways to show what you saw on a page. In Linux or Unix, there are a couple of simple tools for this.
    If you’re using a graphical environment, scrot is a great tool for quick screenshots:
    scrot '%Y-%m-%d_%H-%M-%S.png' -e 'mv $f ~/screenshots/'
    If you have ImageMagick installed, you can use:
    import -window root ~/screenshots/$(date +%F_%H-%M-%S).png
    These tools save screenshots with the date and time in the filename, which makes it super easy to sort and find them later. You can also create a folder called screenshots in your home directory to keep things tidy.
    Use Checksums to Confirm File Integrity
    When you’re saving evidence or tracking content over time, it’s a good idea to keep track of your files’ integrity. A simple way to do this is by creating a hash value for each file.
    Linux and Unix systems come with a tool called sha256sum that makes this easy.
    Here’s how you can use it:
    sha256sum saved_page.html > hash_log.txt
    This creates a unique signature for the file. If you ever need to prove that the file hasn’t changed, you can compare the current hash with the original one. It’s a good way to maintain confidence in your saved content.
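    Later on, you can ask sha256sum to re-check the file against the recorded hash:
    sha256sum -c hash_log.txt
    # prints "saved_page.html: OK" when the file still matches the recorded hash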
    Organize Your Files in Folders
    The key to staying organized is to keep everything related to one event or day in the same folder. You can create a structure like this:
    ~/web_monitoring/
      2025-04-10/
        saved_page.html
        screenshot1.png
        session_log.txt
        hash_log.txt
    This kind of structure makes it easy to find and access your saved pages later. You can even back these folders up to cloud storage or an external drive for safekeeping.
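    A couple of commands are enough to create today's folder and keep an off-machine copy; the backup destination below is just an example path:
    mkdir -p ~/web_monitoring/$(date +%F)
    # back the whole archive up to an external drive or remote location
    rsync -a ~/web_monitoring/ /media/usb/web_monitoring_backup/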
    Set Up a Simple Monitor Script
    If you want to stay on top of new mentions or changes to a particular site or keyword, you can create a simple watch script using the command line.
    One popular method is to use curl to grab search results, then filter them with tools like grep.
    For example:
    curl -s "https://www.google.com/search?q=your+name" > ~/search_logs/google_$(date +%F).html
    You can review the saved file manually or use commands to highlight certain keywords. You can also compare today's results with yesterday's using the diff command to spot new mentions. Additionally, if needed, you can also look into how to delete a Google search result.
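    For example, assuming you save one file per day as shown above (and that your system has GNU date for the -d flag), a quick comparison might look like this:
    diff ~/search_logs/google_$(date -d yesterday +%F).html ~/search_logs/google_$(date +%F).html
    # lines prefixed with > are new in today's results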
    To automate this, just create a cron job that runs the script every day:
    crontab -e
    Then add a line like this:
    0 7 * * * /home/user/scripts/search_watch.sh
    This runs the script at 7 a.m. daily and stores the results in a folder you choose. Over time, you’ll build a personal archive of search results that you can refer to anytime.
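    The script itself can stay very small. Here is a minimal sketch; the search query, paths, and filename are placeholders to adapt:
    #!/bin/bash
    # search_watch.sh - save today's search results for later review
    mkdir -p "$HOME/search_logs"
    curl -s "https://www.google.com/search?q=your+name" > "$HOME/search_logs/google_$(date +%F).html"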
    Prepare Your Submission Package
    If you ever need to contact a website or a service provider about a page, it’s helpful to have everything ready in one place. That way, you can share what you’ve collected clearly and professionally.
    Here’s what you might include:
    The exact URL of the page
    A brief explanation of why you're reaching out
    A copy of the page you saved
    One or more screenshots
    A summary of what you're requesting
    Some platforms also have forms or tools you can use. For example, search engines may provide an online form for submitting requests.
    If you want to contact a site directly, you can use the whois command to find the owner or hosting provider:
    whois example.com
    This will give you useful contact information or point you toward the company that hosts the site.
    Automate Your Process with Cron
    Once you have everything set up, you can automate the entire workflow using cron jobs. These scheduled tasks let your system do the work while you focus on other things.
    For example, you can schedule daily page saves, keyword searches, or hash checks. This makes your documentation process consistent and thorough, without any extra effort after setup.
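    As a rough sketch, a crontab combining the earlier pieces might look like this; the times, paths, and script name are illustrative, and note that % must be escaped in crontab entries:
    # m h dom mon dow command
    0 7 * * * /home/user/scripts/search_watch.sh
    30 7 * * * sha256sum /home/user/web_monitoring/$(date +\%F)/*.html >> /home/user/web_monitoring/hash_log.txt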
    Linux and Unix give you the tools to turn this into a fully automated system. It’s a great way to stay prepared and organized using technology you already have.
    Final Thoughts
    Linux and Unix users have a unique advantage when it comes to documenting web content. With simple tools like wget, script, and scrot, you can create a complete, organized snapshot of any page or event online. These tools aren’t just powerful—they’re also flexible and easy to use once you get the hang of them.
    The post Best Way to Document Harmful Content for Removal appeared first on Unixmen.
  18. Formatting Text in Logseq

    by: Sreenath
    Fri, 11 Apr 2025 15:09:58 GMT

    Logseq is a highly efficient note-taking and knowledge management app with decent Markdown support.
    While using Logseq, one thing to keep in mind is that the text formatting isn't pure Markdown. This is because Logseq uses bullet blocks as the basic unit of content and also supports Org-mode.
    Whenever you start a new document or press Enter after a sentence, a new block is created — and this block can be referenced from anywhere within Logseq. That’s part of what makes Logseq so powerful.
    Still, formatting your notes clearly is just as important. In this article, we’ll take a closer look at how text formatting works in Logseq.
    Basic Markdown syntax
    As I said above, since Logseq supports Markdown, all the basic Markdown syntax will work here.
    You remember the Markdown syntax, right?
    Six levels of heading: # Level One, ## Level Two, ### Level Three, #### Level Four, ##### Level Five, ###### Level Six
    Hyperlink: [Link Text](Link Address/URL)
    Image: ![Image Caption](Image path)
    Bold text: **Bold Text**
    Italics text: *Italics*
    Striked-out text: ~~Striked-out Text~~
    In-line code: `inline code`
    Code block: ``` on a line of its own, then the code, then ``` again
    Table: |Column Header|Column Header| followed by | ---------------- | ---------------| and rows like | Items | Items |
    Logseq Markdown Rendering
    💡You can press the / key to get all the available format options.
    Adding quotes
    Quotes can be added in Logseq using two methods.
    First, using the traditional Markdown method of adding a quote by using > in front of the text.
    > This should appear as a quote
    Second, since Logseq has Org-mode support, you can create a quote block using the syntax:
    #+BEGIN_QUOTE
    Your Quote text here
    #+END_QUOTE
    You can access this by pressing the < key, then typing Quote and pressing Enter.
    🚧If you use quotes with the preceding > syntax, every Markdown renderer will render the document properly. The Org-mode syntax won't work in all environments.
    Adding Quotes in Logseq (video)
    Add an admonition block
    Admonition blocks or callouts come in handy for highlighting particular piece of information in your notes, like a tip or a warning.
    The warning below is the best example here.
    🚧These admonition blocks are a feature of the Logseq app. You cannot expect them to work properly in other apps, so plain text Markdown users should keep that in mind.
    The usual Org-mode syntax for these blocks is:
    #+BEGIN_<BLOCK NAME>
    Your Block Text
    #+END_<BLOCK NAME>
    For example, a simple tip block looks like:
    #+BEGIN_TIP
    This is a tip block
    #+END_TIP
    Let's take a look at some other available block names: NOTE, TIP, IMPORTANT, CAUTION, PINNED.
    Admonition Blocks in Logseq.
    You can access these by typing the < key and then searching for the required block.
    Admonition blocks in Logseq (video)
    Conclusion
    The ability to add a callout box makes your notes more useful, in my opinion. At least it does for me, as I can highlight important information in my notes. I am a fan of them, and you can see plenty of them in my articles on It's FOSS as well.
    Stay tuned in this series; I'll cover adding references in Logseq in the next part.
  19. CSS-Tricks Chronicles XLIII

    by: Geoff Graham
    Fri, 11 Apr 2025 12:39:26 +0000

    Normally, I like to publish one of these updates every few months. But seeing as the last one dates back to September of last year, I’m well off that mark and figured it’s high time to put pen to paper. The fact is that a lot is happening around here at CSS-Tricks — and it’s all good stuff.
    The Almanac is rolling
    In the last post of 2024, I said that filling the Almanac was a top priority heading into this year. We had recently refreshed the whole dang thing, complete with completely new sections for documenting CSS selectors, at-rules, and functions on top of the sections we already had for properties and pseudo-selectors. The only problem is that those new sections were pretty bare.
    Well, not only has this team stepped up to produce a bunch of new content for those new sections, but so have you. Together, we’ve published 21 new Almanac entries since the start of 2025. Here they are in all their glory:
    animation-timeline, interpolate-size, overlay, @charset, @counter-style, @import, @keyframes, @namespace, @page, @view-transition, attr(), calc-size(), counter(), counters(), hsl(), lab(), lch(), light-dark(), oklch(), rgb(), symbols()
    What’s even better? There are currently fourteen more in the hopper that we’re actively working on. I certainly do not expect us to sustain this sort of pace all year. A lot of work goes into each and every entry. Plus, if all we ever did was write in the Almanac, we would never get new articles and tutorials out to you, which is really what we’re all about around here.
    A lot of podcasts and events
    Those of you who know me know that I’m not the most social person in all the land. Yes, I like hanging out with folks and all that, but I tend to keep my activities to back-of-the-house stuff and prefer to stay out of view.
    So, that’s why it’s weird for me to call out a few recent podcast and event appearances. It’s not like I do these things all that often, but they are fun and I like to note them, even if it’s only for posterity.
    I hosted Smashing Meets Accessibility, a mini online conference that featured three amazing speakers talking about the ins and outs of WCAG conformance, best practices, and incredible personal experiences shaped by disability.
    I hosted Smashing Meets CSS, another mini conference from the wonderful Smashing Magazine team. I got to hang out with Adam Argyle, Julia Micene, and Miriam Suzanne, all of whom blew my socks off with their presentations and panel discussion on what’s new and possible in modern CSS.
    I’m co-hosting a brand-new podcast with Brad Frost called Open Up! We recorded the first episode live in front of an audience that was allowed to speak up and participate in the conversation. The whole idea of the show is that we talk more about the things we tend to talk less about in our work as web designers and developers — the touchy-feely side of what we do. We covered so many heady topics, from desperation during layoffs to rediscovering purpose in your work.
    I was a guest on the Mental Health in Tech podcast, joining a panel of other front-enders to discuss angst in the face of recent technological developments. The speed and constant drive to market new technologies is dizzying and, to me at least, off-putting to the extent that I’ve questioned my entire place in it as a developer. What a blast getting to return to the podcast a second time and talk shop with a bunch of the most thoughtful, insightful people you’ll ever hear. I’ll share that when it’s published.
    A new guide on styling counters
    We published it just the other week! I’ll be honest and say that a complete guide about styling counters in CSS was not the first thing that came to my mind when we started planning new ideas, but I’ll be darned if Juan didn’t demonstrate just how big a topic it is. There are so many considerations when it comes to styling counters — design! accessibility! semantics! — and the number of tools we have in CSS to style them is mind-boggling, including two functions that look very similar but have vastly different capabilities for creating custom counters — counter() and counters() (which are also freshly published in the Almanac).
    At the end of last year, I said I hoped to publish 1-2 new guides, and here we are in the first quarter of 2025 with our first one out in the wild! That gives me hope that we’ll be able to get another comprehensive guide out before the end of the year.
    Authors
    I think the most exciting update of all is getting to recognize the folks who have published new articles with us since the last update. Please help me extend a huge round of applause to all the faces who have rolled up their sleeves and shared their knowledge with us.
    Lee Meyer, Zell Liew, Andy Clarke (I can’t believe it!), Temani Afif, Andy Bell, Preethi, Daniel Schwarz, Bryan Robinson, Sunkanmi Fafowora
    And, of course, nothing on this site would be possible without ongoing help from Juan Diego Rodriguez and Ryan Trimble. Those two not only do a lot of heavy lifting to keep the content machine fed, but they are also just two wonderful people who make my job a lot more fun and enjoyable. Seriously, guys, you mean a lot to this site and me!
    CSS-Tricks Chronicles XLIII originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  20. by: Abhishek Prakash
    Fri, 11 Apr 2025 17:22:49 +0530

    Linux can feel like a big world when you're just getting started — but you don’t have to figure it all out on your own.
    Each edition of LHB Linux Digest brings you clear, helpful articles and quick tips to make everyday tasks a little easier.
    Chances are, a few things here will click with you — and when they do, try working them into your regular routine. Over time, those small changes add up and before you know it, you’ll feel more confident and capable navigating your Linux setup.
    Here are the highlights of this edition:
    Running sudo without password
    Port mapping in Docker
    Docker log viewer tool
    And more tools, tips and memes for you
    This edition of LHB Linux Digest newsletter is supported by Typesense.
    ❇️ Typesense, Open Source Algolia Alternative
    Typesense is the free, open-source search engine for forward-looking devs.
    Make it easy on people: Tpyos? Typesense knows we mean typos, and they happen. With ML-powered typo tolerance and semantic search, Typesense helps your customers find what they’re looking for—fast.
    👉 Check them out on GitHub.
    GitHub - typesense/typesense: Open Source alternative to Algolia + Pinecone and an Easier-to-Use alternative to ElasticSearch ⚡ 🔍 ✨ Fast, typo tolerant, in-memory fuzzy Search Engine for building delightful search experiences
     
      This post is for subscribers only
  21. by: Zell Liew
    Thu, 10 Apr 2025 12:39:43 +0000

    By this point, it’s not a secret to most people that I like Tailwind.
    But, unknown to many people (who often jump to conclusions when you mention Tailwind), I don’t like vanilla Tailwind. In fact, I find most of it horrible and I shall refrain from saying further unkind words about it.
    But I recognize and see that Tailwind’s methodology has merits — lots of them, in fact — and they go a long way to making your styles more maintainable and performant.
    Today, I want to explore one of these merit-producing features that has been severely undersold — Tailwind’s @apply feature.
    What @apply does
    Tailwind’s @apply feature lets you “apply” (or simply put, copy-and-paste) a Tailwind utility into your CSS.
    Most of the time, people showcase Tailwind’s @apply feature with one of Tailwind’s single-property utilities (which changes a single CSS declaration). When showcased this way, @apply doesn’t sound promising at all. It sounds downright stupid. So obviously, nobody wants to use it.
    /* Input */
    .selector { @apply p-4; }
    /* Output */
    .selector { padding: 1rem; }
    To make it worse, Adam Wathan recommends against using @apply, so the uptake couldn’t be worse.
    Personally, I think Tailwind’s @apply feature is better than described.
    Tailwind’s @apply is like Sass’s @include
    If you were around when Sass was the dominant CSS processing tool, you’ve probably heard of Sass mixins. They are blocks of code that you define in advance to copy-paste into the rest of your code.
    To create a mixin, you use @mixin. To use a mixin, you use @include.
    // Defining the mixin
    @mixin some-mixin() { color: red; background: blue; }
    // Using the mixin
    .selector { @include some-mixin(); }
    /* Output */
    .selector { color: red; background: blue; }
    Tailwind’s @apply feature works the same way. You can define Tailwind utilities in advance and use them later in your code.
    /* Defining the utility */
    @utility some-utility { color: red; background: blue; }
    /* Applying the utility */
    .selector { @apply some-utility; }
    /* Output */
    .selector { color: red; background: blue; }
    Tailwind utilities are much better than Sass mixins
    Tailwind’s utilities can be used directly in the HTML, so you don’t have to write a CSS rule for it to work.
    @utility some-utility { color: red; background: blue; }
    <div class="some-utility">...</div>
    On the contrary, for Sass mixins, you need to create an extra selector to house your @include rules before using them in the HTML. That’s one extra step. Many of these extra steps add up to a lot.
    @mixin some-mixin() { color: red; background: blue; }
    .selector { @include some-mixin(); }
    /* Output */
    .selector { color: red; background: blue; }
    <div class="selector">...</div>
    Tailwind’s utilities can also be used with their responsive variants. This unlocks media queries straight in the HTML and can be a superpower for creating responsive layouts.
    <div class="utility1 md:utility2">…</div> A simple and practical example
    One of my favorite — and most easily understood — examples of all time is a combination of two utilities that I’ve built for Splendid Layouts (a part of Splendid Labz):
    vertical: makes a vertical layout
    horizontal: makes a horizontal layout
    Defining these two utilities is easy. For vertical, we can use flexbox with flex-direction set to column. For horizontal, we use flexbox with flex-direction set to row.
    @utility horizontal { display: flex; flex-direction: row; gap: 1rem; }
    @utility vertical { display: flex; flex-direction: column; gap: 1rem; }
    After defining these utilities, we can use them directly inside the HTML. So, if we want to create a vertical layout on mobile and a horizontal one on tablet or desktop, we can use the following classes:
    <div class="vertical sm:horizontal">...</div> For those who are new to Tailwind, sm: here is a breakpoint variant that tells Tailwind to activate a class when it goes beyond a certain breakpoint. By default, sm is set to 640px, so the above HTML produces a vertical layout on mobile, then switches to a horizontal layout at 640px.
    Open Live Demo
    If you prefer traditional CSS over composing classes like the example above, you can treat @apply like Sass @include and use them directly in your CSS.
    <div class="your-layout">...</div>
    .your-layout {
      @apply vertical;
      @media (width >= 640px) { @apply horizontal; }
    }
    The beautiful part about both of these approaches is you can immediately see what’s happening with your layout — in plain English — without parsing code through a CSS lens. This means faster recognition and more maintainable code in the long run.
    Tailwind’s utilities are a little less powerful compared to Sass mixins
    Sass mixins are more powerful than Tailwind utilities because:
    They let you use multiple variables.
    They let you use other Sass features like @if and @for loops.
    @mixin avatar($size, $circle: false) {
      width: $size;
      height: $size;
      @if $circle { border-radius: math.div($size, 2); }
    }
    On the other hand, Tailwind utilities don’t have these powers. At the very maximum, Tailwind can let you take in one variable through their functional utilities.
    /* Tailwind Functional Utility */
    @utility tab-* { tab-size: --value(--tab-size-*); }
    Fortunately, we’re not affected by this “lack of power” much because we can take advantage of all modern CSS improvements — including CSS variables. This gives you a ton of room to create very useful utilities.
    Let’s go through another example
    A second example I often like to showcase is the grid-simple utility that lets you create grids with CSS Grid easily.
    We can declare a simple example here:
    @utility grid-simple {
      display: grid;
      grid-template-columns: repeat(var(--cols), minmax(0, 1fr));
      gap: var(--gap, 1rem);
    }
    By doing this, we have effectively created a reusable CSS grid (and we no longer have to manually declare minmax everywhere).
    After we have defined this utility, we can use Tailwind’s arbitrary properties to adjust the number of columns on the fly.
    <div class="grid-simple [--cols:3]"> <div class="item">...</div> <div class="item">...</div> <div class="item">...</div> </div> To make the grid responsive, we can add Tailwind’s responsive variants with arbitrary properties so we only set --cols:3 on a larger breakpoint.
    <div class="grid-simple sm:[--cols:3]"> <div class="item">...</div> <div class="item">...</div> <div class="item">...</div> </div> Open Live Demo This makes your layouts very declarative. You can immediately tell what’s going on when you read the HTML.
    Now, on the other hand, if you’re uncomfortable with too much Tailwind magic, you can always use @apply to copy-paste the utility into your CSS. This way, you don’t have to bother writing repeat and minmax declarations every time you need a grid that grid-simple can create.
    .your-layout {
      @apply grid-simple;
      @media (width >= 640px) { --cols: 3; }
    }
    <div class="your-layout"> ... </div>
    By the way, using @apply this way is surprisingly useful for creating complex layouts! But that seems out of scope for this article so I’ll be happy to show you an example another day.
    Wrapping up
    Tailwind’s utilities are very powerful by themselves, but they’re even more powerful if you allow yourself to use @apply (and allow yourself to detach from traditional Tailwind advice). By doing this, you gain access to Tailwind as a tool instead of it being a dogmatic approach.
    To make Tailwind’s utilities even more powerful, you might want to consider building utilities that can help you create layouts and nice visual effects quickly and easily.
    I’ve built a handful of these utilities for Splendid Labz and I’m happy to share them with you if you’re interested! Just check out Splendid Layouts to see a subset of the utilities I’ve prepared.
    By the way, the utilities I showed you above are watered-down versions of the actual ones I’m using in Splendid Labz.
    One more note: When writing this, Splendid Layouts works with Tailwind 3, not Tailwind 4. I’m working on a release soon, so sign up for updates if you’re interested!
    Tailwind’s @apply Feature is Better Than it Sounds originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  22. by: Geoff Graham
    Thu, 10 Apr 2025 11:26:00 +0000

    If I were starting with CSS today for the very first time, I would first want to spend time understanding writing modes because that’s a great place to wrap your head around direction and document flow. But right after that, and even more excitedly so, I would jump right into display and get a firm grasp on layout strategies.
    And where would I learn that? There are lots of great resources out there. I mean, I have a full course called The Basics that gets into all that. I’d say you’d do yourself justice getting that from Andy Bell’s Complete CSS course as well.
    But, hey, here’s a brand new way to bone up on layout: Miriam Suzanne is running a workshop later this month. Cascading Layouts is all about building more resilient and maintainable web layouts using modern CSS, without relying on third-party tools. Remember, Miriam works on CSS specifications, is a core contributor to Sass, and is just plain an all-around great educator. There are few, if any, who are more qualified to cover the ins and outs of CSS layout, and I can tell you that her work really helped inspire and inform the content in my course. The workshop is online, runs April 28-30, and is a whopping $100 off if you register by April 12.
    Just a taste of what’s included:

    Cascading Layouts: A Workshop on Resilient CSS Layouts originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  23. by: Abhishek Prakash
    Thu, 10 Apr 2025 05:17:14 GMT

    Linux YouTuber Brodie Robertson liked It's FOSS' April Fool joke so much that he made a detailed video on it. It's quite fun to watch, actually 😄
    💬 Let's see what else you get in this edition
    A new APT release.
    Photo management software.
    Steam Client offering many refinements for Linux.
    And other Linux news, tips, and, of course, memes!
    This edition of FOSS Weekly is supported by Internxt. SPONSORED
    ❇️ Future-Proof Your Cloud Storage With Post-Quantum Encryption

    Get 82% off any Internxt lifetime plan—a one-time payment for private, post-quantum encrypted cloud storage.

    No subscriptions, no recurring fees, 30-day money back policy.
    Get this deal 📰 Linux and Open Source News
    The Proton VPN app has received many refinements. Steam Client's April 2025 update has a lot to offer for Linux gamers. Pinta has launched a redesigned website in collaboration with RolandiXor. The APT 3.0 release has finally arrived with a better user experience.
    A Colorful APT 3.0 Release Impresses with its New Features - The latest APT release features a new solver, alongside several user experience enhancements. (It's FOSS News, Sourav Rudra)
    🧠 What We’re Thinking About
    Mozilla has begun the initial implementation of AI features into Firefox.
    I Tried This Upcoming AI Feature in Firefox - Firefox will be bringing experimental AI-generated link previews, offering quick on-device summaries. Here’s my quick experience with it. (It's FOSS News, Sourav Rudra)
    🧮 Linux Tips, Tutorials and More
    Get started with Pamac GUI package manager in Arch Linux.
    A list of photo management software on Linux.
    Learn how to install Logseq on your Linux system.
    7 code editors you can use for Vibe Coding.
    7 Code Editors You Can Use for Vibe Coding on Linux - Want to try vibe coding? Here are the best editors I recommend using on Linux. (It's FOSS, Abhishek Kumar)
    Desktop Linux is mostly neglected by the industry but loved by the community. For the past 12 years, It's FOSS has been helping people use Linux on their personal computers. And we are now facing an existential threat from AI models stealing our content.
    If you like what we do and would love to support our work, please become It's FOSS Plus member. It costs $24 a year (less than the cost of a burger meal each month) and you get an ad-free reading experience with the satisfaction of helping the desktop Linux community.
    Join It's FOSS Plus 👷 Homelab and Maker's Corner
    This time, we have a DIY biosignal tool that can be used for neuroscience research and education purposes.
    DIY Neuroscience: Meet the Open Source PiEEG Kit for Brain and Body Signals - The PiEEG kit is an open source, portable biosignal tool designed for research, measuring EEG, EMG, EKG, and EOG signals. Want to crowdfund the project? (It's FOSS News, Sourav Rudra)
    ✨ Apps Highlight
    Clapgrep is a powerful open source search tool for Linux.
    Clapgrep: An Easy-to-Use Open Source Linux App To Search Through Your PDFs and Text Documents - Want to look for something in your text documents? Use Clapgrep to quickly search for it! (It's FOSS News, Sourav Rudra)
    📽️ Videos I am Creating for You
    See the new features in APT 3.0 in action in our latest video.
    Subscribe to It's FOSS YouTube Channel
    🧩 Quiz Time
    Take a trip down memory lane with our 80s Nostalgic Gadgets puzzle.
    80s Nostalgic Gadgets - Remember the 80s? This quiz is for you :) (It's FOSS, Abhishek Prakash)
    How sharp is your Git knowledge? Our latest crossword will test it.
    💡 Quick Handy Tip
    In Firefox, you can delete temporary browsing data using the "Forget" button. First, right-click on the toolbar and select "Customize Toolbar".
    Now, from the list, drag and drop the "Forget" button to the toolbar. If you click on it, you will be asked to clear 5 min, 2 hrs, and 24 hrs of browsing data, pick any one of them and click on "Forget!".
    🤣 Meme of the Week
    The glow up is real with this one. 🤭
    🗓️ Tech Trivia
    On April 7, 1964, IBM introduced the System/360, the first family of computers designed to be fully compatible with each other, unlike earlier systems where each model had its own unique software and hardware.
    🧑‍🤝‍🧑 FOSSverse Corner
    One of our regular FOSSers played around with ARM64 on Linux and liked it.
    ARM64 on Linux is Fun! - Hi, I’ve been playing with my Pinebook Pro lately and tried Armbian, Manjaro, Void and Gentoo on it. It’s been fun! New things learned like boot from u-boot, then moving to tow-boot as “first boot loader” which starts grub. I tried four distros on a SD, Manjaro was the official and Armbian also was an .iso. Void and Gentoo I installed through chroot manually. I’m biased but it says something (at least I think so) that I did a Gentoo install twice to this small laptop. First one was just to try it… (It's FOSS Community, ihasama)
    ❤️ With love
    Share it with your Linux-using friends and encourage them to subscribe (hint: it's here).
    Share the articles in Linux Subreddits and community forums.
    Follow us on Google News and stay updated in your News feed.
    Opt for It's FOSS Plus membership and support us 🙏
    Enjoy FOSS 😄
  24. CSS Carousels

    by: Geoff Graham
    Wed, 09 Apr 2025 13:00:24 +0000

    The CSS Overflow Module Level 5 specification defines a couple of new features that are designed for creating carousel UI patterns:
    Scroll Buttons: Buttons that the browser provides, as in literal <button> elements, that scroll the carousel content 85% of the area when clicked.
    Scroll Markers: The little dots that act as anchored links, as in literal <a> elements that scroll to a specific carousel item when clicked.
    Chrome has prototyped these features and released them in Chrome 135. Adam Argyle has a wonderful explainer over at the Chrome Developer blog. Kevin Powell has an equally wonderful video where he follows the explainer. This post is me taking notes from them.
    First, some markup:
    <ul class="carousel"> <li>...</li> <li>...</li> <li>...</li> <li>...</li> <li>...</li> </ul> First, let’s set these up in a CSS auto grid that displays the list items in a single line:
    .carousel { display: grid; grid-auto-flow: column; } We can tailor this so that each list item takes up a specific amount of space, say 40%, and insert a gap between them:
    .carousel { display: grid; grid-auto-flow: column; grid-auto-columns: 40%; gap: 2rem; } This gives us a nice scrolling area to advance through the list items by moving left and right. We can use CSS Scroll Snapping to ensure that scrolling stops on each item in the center rather than scrolling right past them.
    .carousel { display: grid; grid-auto-flow: column; grid-auto-columns: 40%; gap: 2rem; scroll-snap-type: x mandatory; > li { scroll-snap-align: center; } } Kevin adds a little more flourish to the .carousel so that it is easier to see what’s going on. Specifically, he adds a border to the entire thing as well as padding for internal spacing.
    So far, what we have is a super simple slider of sorts where we can either scroll through items horizontally or click the left and right arrows in the scroller.
    We can add scroll buttons to the mix. We get two buttons, one to navigate one direction and one to navigate the other direction, which in this case is left and right, respectively. As you might expect, we get two new pseudo-elements for enabling and styling those buttons:
    ::scroll-button(left) ::scroll-button(right) Interestingly enough, if you crack open DevTools and inspect the scroll buttons, they are actually exposed with logical terms instead, ::scroll-button(inline-start) and ::scroll-button(inline-end).
    And both of those support the CSS content property, which we use to insert a label into the buttons. Let’s keep things simple and stick with “Left” and “Right” as our labels for now:
    .carousel::scroll-button(left) { content: "Left"; }
    .carousel::scroll-button(right) { content: "Right"; }
    Now we have two buttons above the carousel. Clicking them either advances the carousel left or right by 85%. Why 85%? I don’t know. And neither does Kevin. That’s just what it says in the specification. I’m sure there’s a good reason for it and we’ll get more light shed on it at some point.
    But clicking the buttons in this specific example will advance the scroll only one list item at a time because we’ve set scroll snapping on it to stop at each item. So, even though the buttons want to advance by 85% of the scrolling area, we’re telling it to stop at each item.
    Remember, this is only supported in Chrome at the time of writing:
    (CodePen demo)
    We can select both buttons together in CSS, like this:
    .carousel::scroll-button(left),
    .carousel::scroll-button(right) { /* Styles */ }
    Or we can use the Universal Selector:
    .carousel::scroll-button(*) { /* Styles */ }
    And we can even use newer CSS Anchor Positioning to set the left button on the carousel’s left side and the right button on the carousel’s right side:
    .carousel {
      /* ... */
      anchor-name: --carousel; /* define the anchor */
    }
    .carousel::scroll-button(*) {
      position: fixed; /* set containment on the target */
      position-anchor: --carousel; /* set the anchor */
    }
    .carousel::scroll-button(left) {
      content: "Left";
      position-area: center left;
    }
    .carousel::scroll-button(right) {
      content: "Right";
      position-area: center right;
    }
    Notice what happens when navigating all the way to the left or right of the carousel. The buttons are disabled, indicating that you have reached the end of the scrolling area. Super neat! That’s something that is normally in JavaScript territory, but we’re getting it for free.
    (CodePen demo)
    Let’s work on the scroll markers, or those little dots that sit below the carousel’s content. Each one is an <a> element anchored to a specific list item in the carousel so that, when clicked, you get scrolled directly to that item.
    We get a new pseudo-element for the entire group of markers called ::scroll-marker-group that we can use to style and position the container. In this case, let’s set Flexbox on the group so that we can display them on a single line and place gaps between them in the center of the carousel’s inline size:
    .carousel::scroll-marker-group {
      display: flex;
      justify-content: center;
      gap: 1rem;
    }
    We also get a new scroll-marker-group property that lets us position the group either above (before) the carousel or below (after) it:
    .carousel {
      /* ... */
      scroll-marker-group: after; /* displayed below the content */
    }
    We can style the markers themselves with the new ::scroll-marker pseudo-element:
    .carousel {
      /* ... */
      > li::scroll-marker {
        content: "";
        aspect-ratio: 1;
        border: 2px solid CanvasText;
        border-radius: 100%;
        width: 20px;
      }
    }
    When clicking on a marker, it becomes the “active” item of the bunch, and we get to select and style it with the :target-current pseudo-class:
    li::scroll-marker:target-current { background: CanvasText; }
    Take a moment to click around the markers. Then take a moment using your keyboard and appreciate that we get all of the benefits of focus states as well as the ability to cycle through the carousel items when reaching the end of the markers. It’s amazing what we’re getting for free in terms of user experience and accessibility.
    (CodePen demo)
    We can further style the markers when they are hovered or in focus:
    li::scroll-marker:hover,
    li::scroll-marker:focus-visible { background: LinkText; }
    And we can “animate” the scrolling effect by setting scroll-behavior: smooth on the scroll snapping. Adam smartly applies it when the user’s motion preferences allow it:
    .carousel {
      /* ... */
      @media (prefers-reduced-motion: no-preference) {
        scroll-behavior: smooth;
      }
    }
    Buuuuut that seems to break scroll snapping a bit because the scroll buttons are attempting to slide things over by 85% of the scrolling space. Kevin had to fiddle with his grid-auto-columns sizing to get things just right, but showed how Adam’s example took a different sizing approach. It’s a matter of fussing with things to get them just right.
    (CodePen demo)
    This is just a super early look at CSS Carousels. Remember that this is only supported in Chrome 135+ at the time I’m writing this, and it’s purely experimental. So, play around with it, get familiar with the concepts, and then be open-minded to changes in the future as the CSS Overflow Level 5 specification is updated and other browsers begin building support.
    CSS Carousels originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  25. by: Umair Khurshid
    Tue, 08 Apr 2025 12:11:49 +0530

    Port management in Docker and Docker Compose is essential to properly expose containerized services to the outside world, both in development and production environments.
    Understanding how port mapping works helps avoid conflicts, ensures security, and improves network configuration.
    This tutorial will walk you through how to configure and map ports effectively in Docker and Docker Compose.
    What is port mapping in Docker?
    Port mapping exposes network services running inside a container to the host, to other containers on the same host, or to other hosts and network devices. It allows you to map a specific port from the host system to a port on the container, making the service accessible from outside the container.
    In the schematic below, there are two separate services running in two containers, and both use port 80. Their ports are mapped to the host's ports 8080 and 8090, and thus they are accessible from outside using these two ports.
    How to map ports in Docker
    Typically, a running container has its own isolated network namespace with its own IP address. By default, containers can communicate with each other and with the host system, but external network access is not automatically enabled.
    Port mapping is used to create communication between the container's isolated network and the host system's network.
    For example, let's map Nginx to port 80:
    docker run -d --publish 8080:80 nginx
    The --publish flag (usually shortened to -p) is what allows us to create that association between the local port (8080) and the port of interest to us in the container (80).
    In this case, to access it, you simply use a web browser and access http://localhost:8080
    On the other hand, if the image you are using to create the container has made good use of the EXPOSE instructions, you can use the command in this other way:
    docker run -d --publish-all hello-world
    Docker takes care of choosing a random port (instead of port 80 or other specified ports) on your machine to map to those specified in the Dockerfile:
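    For any container started with --publish-all, you can check which host ports were chosen with the docker port subcommand (replace the placeholder with your container's name or ID):
    docker port <container_id_or_name>
    # prints mappings such as: 80/tcp -> 0.0.0.0:32768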
    Mapping ports with Docker Compose
    Docker Compose allows you to define container configurations in a docker-compose.yml. To map ports, you use the ports YAML directive.
    version: '3.8'
    services:
      web:
        image: nginx:latest
        ports:
          - "8080:80"
    In this example, as in the previous case, the Nginx container will expose port 80 on the host's port 8080.
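    As a minimal sketch, assuming the service name web from the snippet above, you could bring the stack up and confirm the mapping from the command line:
    docker compose up -d
    docker compose port web 80
    # prints the host address and port, e.g. 0.0.0.0:8080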
    Port mapping vs. exposing
    It is important not to confuse the use of ports with expose directives. The former creates true port forwarding to the outside. The latter only serves to document that an internal port is being used by the container, but does not create any exposure to the host.
    services:
      app:
        image: myapp
        expose:
          - "3000"
    In this example, port 3000 will only be accessible from other containers in the same Docker network, but not from outside.
    Mapping Multiple Ports
    You just saw how to map a single port, but Docker also allows you to map more than one port at a time. This is useful when your container needs to expose multiple services on different ports.
    Let's configure a nginx server to work in a dual stack environment:
    docker run -p 8080:80 -p 443:443 nginx
    Now the server listens for both HTTP traffic on port 8080, mapped to port 80 inside the container, and HTTPS traffic on port 443, mapped to port 443 inside the container.
    Specifying host IP address for port binding
    By default, Docker binds container ports to all available IP addresses on the host machine. If you need to bind a port to a specific IP address on the host, you can specify that IP in the command. This is useful when you have multiple network interfaces or want to restrict access to specific IPs.
    docker run -p 192.168.1.100:8080:80 nginx This command binds port 8080 on the specific IP address 192.168.1.100 to port 80 inside the container.
    Port range mapping
    Sometimes, you may need to map a range of ports instead of a single port. Docker allows this by specifying a range for both the host and container ports. For example,
    docker run -p 5000-5100:5000-5100 nginx This command maps a range of ports from 5000 to 5100 on the host to the same range inside the container. This is particularly useful when running services that need multiple ports, like a cluster of servers or applications with several endpoints.
    Using different ports for host and container
    In situations where you need to avoid conflicts, security concerns, or manage different environments, you may want to map different port numbers between the host machine and the container. This can be useful if the container uses a default port, but you want to expose it on a different port on the host to avoid conflicts.
    docker run -p 8081:80 nginx This command maps port 8081 on the host to port 80 inside the container. Here, the container is still running its web server on port 80, but it is exposed on port 8081 on the host machine.
    Binding to UDP ports (if you need that)
    By default, Docker maps TCP ports. However, you can also map UDP ports if your application uses UDP. This is common for protocols and applications that require low latency, real-time communication, or broadcast-based communication.
    For example, DNS uses UDP for query and response communication due to its speed and low overhead. If you are running a DNS server inside a Docker container, you would need to map UDP ports.
    docker run -p 53:53/udp ubuntu/bind9
    This command maps UDP port 53 on the host to UDP port 53 inside the container.
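    Assuming a DNS server is actually configured and running in that container, you could verify the UDP mapping from the host with a quick query (dig ships with the dnsutils or bind-utils package):
    dig @127.0.0.1 -p 53 example.com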
    Inspecting and verifying port mapping
    Once you have set up port mapping, you may want to verify that it’s working as expected. Docker provides several tools for inspecting and troubleshooting port mappings.
    To list all active containers and see their port mappings, use the docker ps command. The output includes a PORTS column that shows the mapping between the host and container ports.
    docker ps
    This might output something like:
    If you need more detailed information about a container's port mappings, you can use docker inspect. This command gives you a JSON output with detailed information about the container's configuration.
    docker inspect <container_id> | grep "Host" This command will display the port mappings, such as:
    Wrapping Up
    Learn Docker: Complete Beginner's Course - Learn Docker, an important skill to have for any DevOps engineer and modern sysadmin. Learn all the essentials of Docker in this series. (Linux Handbook, Abdullah Tarek)
    When you are first learning Docker, one of the trickier topics is often networking and port mapping. I hope this rundown has helped clarify how port mapping works and how you can effectively use it to connect your containers to the outside world and orchestrate services across different environments.
