
All Activity


  1. Today
  2. Blogger posted a blog entry in Programmer's Corner
by: Geoff Graham Wed, 09 Apr 2025 13:00:24 +0000

The CSS Overflow Module Level 5 specification defines a couple of new features that are designed for creating carousel UI patterns:

- Scroll Buttons: Buttons that the browser provides, as in literal <button> elements, that scroll the carousel content by 85% of the area when clicked.
- Scroll Markers: The little dots that act as anchored links, as in literal <a> elements, that scroll to a specific carousel item when clicked.

Chrome has prototyped these features and released them in Chrome 135. Adam Argyle has a wonderful explainer over at the Chrome Developer blog. Kevin Powell has an equally wonderful video where he follows the explainer. This post is me taking notes from them.

First, some markup:

```html
<ul class="carousel">
  <li>...</li>
  <li>...</li>
  <li>...</li>
  <li>...</li>
  <li>...</li>
</ul>
```

Next, let's set these up in a CSS auto grid that displays the list items in a single line:

```css
.carousel {
  display: grid;
  grid-auto-flow: column;
}
```

We can tailor this so that each list item takes up a specific amount of space, say 40%, and insert a gap between them:

```css
.carousel {
  display: grid;
  grid-auto-flow: column;
  grid-auto-columns: 40%;
  gap: 2rem;
}
```

This gives us a nice scrolling area to advance through the list items by moving left and right. We can use CSS Scroll Snapping to ensure that scrolling stops on each item in the center rather than scrolling right past it:

```css
.carousel {
  display: grid;
  grid-auto-flow: column;
  grid-auto-columns: 40%;
  gap: 2rem;
  scroll-snap-type: x mandatory;

  > li {
    scroll-snap-align: center;
  }
}
```

Kevin adds a little more flourish to the .carousel so that it is easier to see what's going on. Specifically, he adds a border to the entire thing as well as padding for internal spacing.

So far, what we have is a super simple slider of sorts, where we can either scroll through items horizontally or click the left and right arrows in the scroller. We can add scroll buttons to the mix. We get two buttons, one to navigate one direction and one to navigate the other, which in this case is left and right, respectively. As you might expect, we get two new pseudo-elements for enabling and styling those buttons:

::scroll-button(left)
::scroll-button(right)

Interestingly enough, if you crack open DevTools and inspect the scroll buttons, they are actually exposed with logical terms instead: ::scroll-button(inline-start) and ::scroll-button(inline-end). Both of those support the CSS content property, which we use to insert a label into the buttons. Let's keep things simple and stick with "Left" and "Right" as our labels for now:

```css
.carousel::scroll-button(left) {
  content: "Left";
}

.carousel::scroll-button(right) {
  content: "Right";
}
```

Now we have two buttons above the carousel. Clicking them advances the carousel left or right by 85%. Why 85%? I don't know, and neither does Kevin. That's just what it says in the specification. I'm sure there's a good reason for it, and we'll get more light shed on it at some point. But clicking the buttons in this specific example will advance the scroll only one list item at a time, because we've set scroll snapping to stop at each item. So, even though the buttons want to advance by 85% of the scrolling area, we're telling them to stop at each item.
Remember, this is only supported in Chrome at the time of writing.

We can select both buttons together in CSS, like this:

```css
.carousel::scroll-button(left),
.carousel::scroll-button(right) {
  /* Styles */
}
```

Or we can use the universal selector:

```css
.carousel::scroll-button(*) {
  /* Styles */
}
```

And we can even use newer CSS Anchor Positioning to set the left button on the carousel's left side and the right button on the carousel's right side:

```css
.carousel {
  /* ... */
  anchor-name: --carousel; /* define the anchor */
}

.carousel::scroll-button(*) {
  position: fixed; /* set containment on the target */
  position-anchor: --carousel; /* set the anchor */
}

.carousel::scroll-button(left) {
  content: "Left";
  position-area: center left;
}

.carousel::scroll-button(right) {
  content: "Right";
  position-area: center right;
}
```

Notice what happens when navigating all the way to the left or right of the carousel: the buttons are disabled, indicating that you have reached the end of the scrolling area. Super neat! That's something that is normally in JavaScript territory, but we're getting it for free.

Let's work on the scroll markers, those little dots that sit below the carousel's content. Each one is an <a> element anchored to a specific list item in the carousel so that, when clicked, you get scrolled directly to that item. We get a new pseudo-element for the entire group of markers, called ::scroll-marker-group, that we can use to style and position the container. In this case, let's set Flexbox on the group so that we can display the markers on a single line, place gaps between them, and center them along the carousel's inline size:

```css
.carousel::scroll-marker-group {
  display: flex;
  justify-content: center;
  gap: 1rem;
}
```

We also get a new scroll-marker-group property that lets us position the group either above (before) the carousel or below (after) it:

```css
.carousel {
  /* ... */
  scroll-marker-group: after; /* displayed below the content */
}
```

We can style the markers themselves with the new ::scroll-marker pseudo-element:

```css
.carousel {
  /* ... */

  > li::scroll-marker {
    content: "";
    aspect-ratio: 1;
    border: 2px solid CanvasText;
    border-radius: 100%;
    width: 20px;
  }
}
```

When clicking a marker, it becomes the "active" item of the bunch, and we get to select and style it with the :target-current pseudo-class:

```css
li::scroll-marker:target-current {
  background: CanvasText;
}
```

Take a moment to click around the markers. Then take a moment using your keyboard, and appreciate that we get all of the benefits of focus states, as well as the ability to cycle through the carousel items when reaching the end of the markers. It's amazing what we're getting for free in terms of user experience and accessibility.

We can further style the markers when they are hovered or in focus:

```css
li::scroll-marker:hover,
li::scroll-marker:focus-visible {
  background: LinkText;
}
```

And we can "animate" the scrolling effect by setting scroll-behavior: smooth on the scroll snapping. Adam smartly applies it only when the user's motion preferences allow it:

```css
.carousel {
  /* ... */

  @media (prefers-reduced-motion: no-preference) {
    scroll-behavior: smooth;
  }
}
```

Buuuuut that seems to break scroll snapping a bit, because the scroll buttons are attempting to slide things over by 85% of the scrolling space. Kevin had to fiddle with his grid-auto-columns sizing to get things just right, and showed how Adam's example took a different sizing approach. It's a matter of fussing with things to get them just right.
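To tie the pieces together, here is a minimal, self-contained sketch of everything covered above: snapping, scroll buttons, and scroll markers. This is my own consolidation rather than code from Adam or Kevin, it only works in Chrome 135+, and the overflow-x declaration is my assumption (the element needs to be a scroll container for the buttons and markers to appear):

```html
<!-- Minimal CSS carousel sketch: Chrome 135+ only at the time of writing -->
<ul class="carousel">
  <li>1</li>
  <li>2</li>
  <li>3</li>
  <li>4</li>
  <li>5</li>
</ul>

<style>
  .carousel {
    display: grid;
    grid-auto-flow: column;
    grid-auto-columns: 40%;
    gap: 2rem;
    overflow-x: auto;              /* must be a scroller (assumption) */
    scroll-snap-type: x mandatory; /* stop on each item */
    scroll-marker-group: after;    /* dots below the content */
    anchor-name: --carousel;       /* anchor for the buttons */

    > li {
      scroll-snap-align: center;
    }

    /* One dot per item; :target-current highlights the active one */
    > li::scroll-marker {
      content: "";
      width: 20px;
      aspect-ratio: 1;
      border: 2px solid CanvasText;
      border-radius: 100%;
    }
    > li::scroll-marker:target-current {
      background: CanvasText;
    }
  }

  .carousel::scroll-marker-group {
    display: flex;
    justify-content: center;
    gap: 1rem;
  }

  /* Browser-provided prev/next buttons, pinned to the carousel's sides */
  .carousel::scroll-button(*) {
    position: fixed;
    position-anchor: --carousel;
  }
  .carousel::scroll-button(left) {
    content: "Left";
    position-area: center left;
  }
  .carousel::scroll-button(right) {
    content: "Right";
    position-area: center right;
  }
</style>
```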
This is just a super early look at CSS Carousels. Remember that this is only supported in Chrome 135+ at the time I'm writing this, and it's purely experimental. So, play around with it, get familiar with the concepts, and then be open-minded to changes in the future as the CSS Overflow Level 5 specification is updated and other browsers begin building support.

CSS Carousels originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  3. Yesterday
4. by: Umair Khurshid Tue, 08 Apr 2025 12:11:49 +0530

Port management in Docker and Docker Compose is essential to properly expose containerized services to the outside world, both in development and production environments. Understanding how port mapping works helps avoid conflicts, ensures security, and improves network configuration. This tutorial will walk you through configuring and mapping ports effectively in Docker and Docker Compose.

What is port mapping in Docker?

Port mapping exposes network services running inside a container to the host, to other containers on the same host, or to other hosts and network devices. It allows you to map a specific port from the host system to a port on the container, making the service accessible from outside the container.

In the schematic below, there are two separate services running in two containers, and both use port 80. Their ports are mapped to the host's ports 8080 and 8090, and thus they are accessible from outside using these two ports.

How to map ports in Docker

Typically, a running container has its own isolated network namespace with its own IP address. By default, containers can communicate with each other and with the host system, but external network access is not automatically enabled. Port mapping is used to create communication between the container's isolated network and the host system's network.

For example, let's map Nginx to port 80:

```bash
docker run -d --publish 8080:80 nginx
```

The --publish flag (usually shortened to -p) is what allows us to create that association between the local port (8080) and the port of interest to us in the container (80). In this case, to access it, you simply use a web browser and visit http://localhost:8080

On the other hand, if the image you are using to create the container has made good use of the EXPOSE instruction, you can use the command in this other way:

```bash
docker run -d --publish-all hello-world
```

Docker takes care of choosing a random port on your machine (instead of port 80 or other specified ports) to map to those specified in the Dockerfile.

Mapping ports with Docker Compose

Docker Compose allows you to define container configurations in a docker-compose.yml. To map ports, you use the ports YAML directive:

```yaml
version: '3.8'
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
```

In this example, as in the previous case, the Nginx container will expose port 80 on the host's port 8080.

Port mapping vs. exposing

It is important not to confuse the ports directive with the expose directive. The former creates true port forwarding to the outside. The latter only serves to document that an internal port is being used by the container, but does not create any exposure to the host.

```yaml
services:
  app:
    image: myapp
    expose:
      - "3000"
```

In this example, port 3000 will only be accessible from other containers on the same Docker network, but not from outside.

Mapping multiple ports

You just saw how to map a single port, but Docker also allows you to map more than one port at a time. This is useful when your container needs to expose multiple services on different ports. Let's configure an Nginx server to work in a dual-stack environment:

```bash
docker run -p 8080:80 -p 443:443 nginx
```

Now the server listens for both HTTP traffic on port 8080, mapped to port 80 inside the container, and HTTPS traffic on port 443, mapped to port 443 inside the container.

Specifying the host IP address for port binding

By default, Docker binds container ports to all available IP addresses on the host machine.
If you need to bind a port to a specific IP address on the host, you can specify that IP in the command. This is useful when you have multiple network interfaces or want to restrict access to specific IPs.

```bash
docker run -p 192.168.1.100:8080:80 nginx
```

This command binds port 8080 on the specific IP address 192.168.1.100 to port 80 inside the container.

Port range mapping

Sometimes you may need to map a range of ports instead of a single port. Docker allows this by specifying a range for both the host and container ports. For example:

```bash
docker run -p 5000-5100:5000-5100 nginx
```

This command maps the range of ports from 5000 to 5100 on the host to the same range inside the container. This is particularly useful when running services that need multiple ports, like a cluster of servers or applications with several endpoints.

Using different ports for host and container

In situations where you need to avoid conflicts, address security concerns, or manage different environments, you may want to map different port numbers between the host machine and the container. This can be useful if the container uses a default port, but you want to expose it on a different port on the host to avoid conflicts.

```bash
docker run -p 8081:80 nginx
```

This command maps port 8081 on the host to port 80 inside the container. Here, the container is still running its web server on port 80, but it is exposed on port 8081 on the host machine.

Binding to UDP ports (if you need that)

By default, Docker maps TCP ports. However, you can also map UDP ports if your application uses UDP. This is common for protocols and applications that require low latency, real-time communication, or broadcast-based communication. For example, DNS uses UDP for query and response communication due to its speed and low overhead. If you are running a DNS server inside a Docker container, you would need to map UDP ports:

```bash
docker run -p 53:53/udp ubuntu/bind9
```

This command maps UDP port 53 on the host to UDP port 53 inside the container.

Inspecting and verifying port mapping

Once you have set up port mapping, you may want to verify that it's working as expected. Docker provides several tools for inspecting and troubleshooting port mappings. To list all active containers and see their port mappings, use the docker ps command. The output includes a PORTS column that shows the mapping between the host and container ports:

```bash
docker ps
```

If you need more detailed information about a container's port mappings, you can use docker inspect. This command gives you JSON output with detailed information about the container's configuration and will display the port mappings:

```bash
docker inspect <container_id> | grep "Host"
```

Wrapping up

Learn Docker: Complete Beginner's Course (Linux Handbook, Abdullah Tarek). Learn Docker, an important skill to have for any DevOps and modern sysadmin. Learn all the essentials of Docker in this series.

When you are first learning Docker, one of the trickier topics is often networking and port mapping. I hope this rundown has helped clarify how port mapping works and how you can effectively use it to connect your containers to the outside world and orchestrate services across different environments.
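As a recap, here is how several of the mappings covered above might look combined in one docker-compose.yml. This is my own sketch rather than a file from the tutorial; the service names and the 192.168.1.100 address are placeholders:

```yaml
# Hypothetical compose file combining the port-mapping styles shown above
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"               # host 8080 -> container 80 (TCP)
      - "443:443"               # same port on both sides
      - "192.168.1.100:8081:80" # bind only on one host IP
  dns:
    image: ubuntu/bind9
    ports:
      - "53:53/udp"             # UDP mapping
  app:
    image: myapp
    expose:
      - "3000"                  # documented/internal only, no host exposure
```

Run `docker compose up -d` and then check the resulting mappings with `docker ps`.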
  5. Last week
6. by: Chris Coyier Mon, 07 Apr 2025 17:01:01 +0000

Love HTML? Good. It's very lovable. One of my favorite parts is how you can screw it all up and it still tries its absolute best to render what it thinks you meant. Not a lot of other languages are like that. Are there any? English, I suppose, lolz. Anyway — I figured I'd just share 10 links about HTML that I've saved because, well, I personally thought they were interesting and enjoyed learning what they had to teach.

The selected date must be within the last 10 years by Gerardo Rodriguez — That's the requirement. How do you do it? Definitely use the HTML validation features for the date input. But unfortunately, JavaScript will need to be involved to set them, which means timezones and all that. (There's a rough sketch of the idea after this list.)

Just because you can doesn't mean you should: the <meter> element by Sophie Koonin — I think the HTML Heat the oven to <meter min="200" max="500" value="350">350 degrees</meter> is hilarious(ly silly). As Sophie puts it, this is the "letter, not the spirit, of semantic HTML."

Fine-tuning Text Inputs by Garrett Dimon — One of those articles that reminds you that there are actually quite a few bonus attributes for HTML inputs, that they are all actually pretty useful, and that you should consider using them.

How to think about HTML responsive images by Dan Cătălin Burzo — Speaking of loads of fairly complex attributes, it's hard to beat responsive images in that regard. Always interesting to read someone thinking through it all. This always leads me to the conclusion that it absolutely needs to be automated.

Styling an HTML dialog modal to take the full height of the viewport by Simon Willison — Sometimes unexpected browser styles can bite you. DevTools helps uncover that, of course… if you can see them. I was once very confused by weirdly behaving dialogs when I set one to position: relative; (they are fixed by browser styles), so watch for that too.

Foundations: grouping forms with <fieldset> and <legend> by Demelza Feltham — "Accessible grouping benefits everyone." Let's clean up the weird styling of fieldset while we're at it.

HTML Whitespace is Broken by Douglas Parker — I think a lot of us have developed an intuition for how HTML uses or ignores whitespace, but Douglas manages to point out some real oddities I hadn't thought of clearly defining before. Like, if there is a space both before and after a </a>, the spaces will collapse, but will the single space be part of the link or not? Formatters struggle with this, as their formatting can introduce output changes. It's one reason I like JSX: it's ultra opinionated on formatting and how spacing is handled.

A highly configurable switch component using modern CSS techniques by Andy Bell — We've got <input type="checkbox" switch> coming (it's in Safari now), but if you can't wait you can build your own, as long as you are careful.

Building a progress-indicating scroll-to-top button in HTML and CSS by Manuel Matuzović — I like that trick where you can use scroll-driven animations to only reveal a "scroll back to the top" button after you've scrolled down a bit. I also like Manuel's idea here where the button itself fills up as you scroll down. I generally don't care for scroll progress indicators, but this is so subtle it's cool.

Test-driven HTML and accessibility by David Luhr — I've written Cypress tests before, and this echoes that but feels kinda lower level. It's interesting looking at server JavaScript executing DOM JavaScript with expect tests. I suppose it's a bit like building your own aXe.
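Since the first link is the most how-to-ish of the bunch, here's a rough sketch of the idea (my own, not Gerardo's code): the date input's min/max attributes give you free HTML validation, and JavaScript fills in the bounds because they depend on "today" — which is where the timezone caveats creep in:

```html
<!-- Hypothetical sketch: constrain a date input to the last 10 years -->
<label>
  Date
  <input type="date" id="pick" required>
</label>

<script>
  const input = document.getElementById("pick");
  const today = new Date();
  const tenYearsAgo = new Date(today);
  tenYearsAgo.setFullYear(today.getFullYear() - 10);

  // toISOString() is UTC, so near midnight the bounds can be a day off
  // in the user's local time; that's the timezone fuss the article means.
  input.max = today.toISOString().slice(0, 10);
  input.min = tenYearsAgo.toISOString().slice(0, 10);
</script>
```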
7. by: Sreenath Mon, 07 Apr 2025 16:18:54 GMT

Logseq is a versatile open source tool for knowledge management. It is regarded as one of the best open source alternatives to the popular proprietary tool Obsidian. While it covers the basics of note-taking, it also doubles as a powerful task manager and journaling tool.

What sets Logseq apart from traditional note-taking apps is its unique organization system, which forgoes hierarchical folder structures in favor of interconnected, block-based notes. This makes it an excellent choice for users seeking granular control and flexibility over their information.

In this article, we'll explore how to install Logseq on Linux distributions.

Use the official AppImage

For Linux systems, Logseq officially provides an AppImage. You can head over to the downloads page and grab the AppImage file.

It is advised to use a tool like AppImageLauncher (it hasn't seen a new release for a while, but it is active) or GearLever to create a desktop integration for Logseq. Fret not, if you would rather not use a third-party tool, you can do it yourself as well.

First, create a folder in your home directory to store all the AppImages. Next, move the Logseq AppImage to this location and give the file execution permission. Right-click on the AppImage file and go to the file properties. In the Permissions tab, select "Allow Executing as a Program" or "Executable as Program", depending on the distro; both mean the same thing. Once done, you can double-click to open the Logseq app. (A terminal equivalent is sketched at the end of this post.)

🚧 If you are using Ubuntu 24.04 or above, you won't be able to open the AppImage of Logseq due to a change in the AppArmor policy. You can either use other sources, like Flatpak, or take a look at a less secure alternative.

Alternatively, use the 'semi-official' Flatpak

Logseq has a Flatpak version available. This is not an official offering from the Logseq team, but is provided by a developer who also contributes to Logseq.

First, make sure your system has Flatpak support. If not, enable Flatpak support and add the Flathub repository by following our guide: Using Flatpak on Linux [Complete Guide] (It's FOSS, Abhishek Prakash).

Now, install Logseq either from a Flatpak-supported software center like GNOME Software, or using the terminal with the following command:

```bash
flatpak install flathub com.logseq.Logseq
```

Other methods

For Ubuntu users and those who have Snap set up, there is an unofficial Logseq client in the Snap store. You can go with that if you prefer. There are also packages available in the AUR for Logseq desktop clients. Arch Linux users can take a look at these packages and get them installed via the terminal or the Pamac package manager.

Post installation

Once you have installed Logseq, open it. This will bring you to the temporary journal page. You need to open a local folder for Logseq to store your work and avoid potential data loss. For this, click on the "Add a graph" button on the top-right. On the resulting page, click on the "Choose a folder" button. From the file chooser, either create a new directory or select an existing directory and click "Open".

That's it. You can start using Logseq now. And I'll help you with that.
I'll be sharing regular tutorials on using Logseq for the next few days/weeks here. Stay tuned.
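For those who prefer the terminal, the AppImage steps described earlier (create a folder, move the file, make it executable) can also be done from a shell. A minimal sketch; the directory and file names are placeholders for whatever you actually downloaded:

```bash
# Keep AppImages in one place
mkdir -p ~/Apps

# Move the downloaded file there (adjust the name/version to your download)
mv ~/Downloads/Logseq-linux-x64-*.AppImage ~/Apps/

# Give it execute permission, then run it
chmod +x ~/Apps/Logseq-linux-x64-*.AppImage
~/Apps/Logseq-linux-x64-*.AppImage
```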
8. by: Lee Meyer Mon, 07 Apr 2025 14:41:53 +0000

When I was young and dinosaurs walked the earth, I worked on a software team that developed a web-based product for two years before ever releasing it. I don't just mean we didn't make it publicly available; we didn't deploy it anywhere except for a test machine in the office, accessed by two internal testers, and this required a change to each tester's hosts file. You don't have to be an agile evangelist to spot the red flag. There's "release early, release often," which seemed revolutionary the first time I heard it after living under a waterfall for years, or there's building so much while waiting so long to deploy that you guarantee weird surprises in a realistic deployment, let alone when you get feedback from real users. I'm told the first deployment experience to a web farm was very special.

A tale of a dodgy deployment

Being a junior, I was spared being involved in the first deployment. But towards the end of the first three-month cycle of fixes, the team leader asked me, "Would you be available on Tuesday at 2 a.m. to come to the office and do a deployment?"

"Yep, sure, no worries." I went home thinking what a funny dude my team leader was. So on Tuesday at 9 a.m., I show up and say good morning to the team leader and the architect, who sit together staring at one computer. I sit down at my dev machine and start typing.

"Man, what happened?" the team leader says over the partition. "You said you'd be here at 2 a.m."

I look at him and see he is not smiling. I say, "Oh. I thought you were joking."

"I was not joking, and we have a massive problem with the deployment."

Uh-oh. I was junior and did not have the combined forty years of engineering experience of the team leader and architect, but what I had that they lacked was a well-rested brain, so I found the problem rather quickly: it was a code change the dev manager had made to the way we handled cookies, which didn't show a problem on the internal test server but broke the world on the real web servers. Perhaps my finding the issue was the only thing that saved me from getting a stern lecture. By the time I left years later, it was just a funny story the dev manager shared in my farewell speech, along with nice compliments about what I had achieved for the company — I also accepted an offer to work for the company again later.

Breaking news: Human beings need sleep

I am sure the two seniors would have been capable of spotting the problem under different circumstances. They had a lot working against them: sleep deprivation, together with the miscommunication about who would be present, would've contributed to feelings of panic, which the outage would've exacerbated after they powered through and deployed without me. More importantly, they didn't know whether the problem was in the new code or human error in their manual deployment process of copying zipped binaries and website files to multiple servers, manually updating config files, and comparing and updating database schemas — all in the wee hours of the morning. They were sleepily searching for a needle in a haystack of their own making. The haystack wouldn't have existed if they had a proven automated deployment process, and if they could be sure the problem could only reside in the code they deployed. There was no reason everything they were doing couldn't be scripted. They could've woken up at 6 a.m. instead of 2 a.m.
to verify the automated release of the website before shifting traffic to it, and fix any problems that became evident in their release without disrupting real users. The company would get a more stable website, and the expensive developers would have more time to focus on developing.

If you manually deploy overnight, and then drive, you're a bloody idiot

The 2 a.m. deployments might seem funny if it wasn't your night to attend and if you have a dark sense of humor. In the subsequent years, I attended many 2 a.m. deployments to atone for the one I slept through. The company paid for breakfast on those days, and if we proved the deployment was working, we could leave for the day and catch up on sleep, assuming we survived the drive home and didn't end up sleeping forever. The manual deployment checklist was perpetually incomplete and out-of-date, yet the process was never blamed for snafus on deployment days. In reality, sometimes a snafu was a direct consequence of the fallibility of manually working from an inaccurate checklist. Sometimes manual deployment wasn't directly the culprit, but it made pinpointing the problem, or deciding whether to roll back, unnecessarily challenging. And you knew rolling back would mean forgoing your sleep again the next day, so you'd have a mounting sleep debt working against you.

I learned a lot from that team and the complex features I had the opportunity to build. But the deployment process was a step backward from my internship doing Windows programming, because in that job I had to write installers so my code would work on user machines, which, by the nature of the task, I didn't have access to. When web development removes that inherent limitation, it's like a devil on your shoulder tempting you to do what seems easy in the moment and update production from your dev machine. You know you want to, especially when the full deployment process is hard and people want a fix straightaway. This is why, if you automate deployments, you want to lock things down so that the automated process is the only way to deploy changes.

As I became more senior and had more say in how these processes happened at my workplace, I started researching — and I found it easy to relate to the shots taken at manual deployers, such as the presentation titled "Octopus Deploy and how to stop deploying like an idiot" and Octopus Deploy founder Paul Stovell's sentiments on how to deploy database updates: "Your database isn't in source control? You don't deserve one. Go use Excel." This approach to giving developers a kick in their complacency reminds me of the long-running anti-drunk-driving ads here in Australia with the slogan "If you drink, then drive, you're a bloody idiot," which scared people straight by insulting them for destructive life choices. In the "Stop deploying like an idiot" talk, Damian Brady insults a hypothetical deployment manager at Acme Corp named Frank, who keeps being a hero by introducing risk and wasted time to a process that could be automated using Octopus, which would never make stupid mistakes like overwriting the config file. "Frank's pretty proud of his process in general," says Damian. "Frank's an idiot."

Why are people like this? Frankly, some of the Franks I have worked with were something worse than idiots. Comedian Jim Jefferies has a bit in which he says he'd take a nice idiot over a clever bastard.
Frank's a cunning bastard wolf in idiotic sheep's clothing — people who work in software tend to have above-average IQs, and a person appointed "deployment manager" will have googled the options that make this task easier, but he chose not to use them. The thing is, Frank gets to seem important, make other devs look and feel stupid when they try to follow his process while he's on leave, and even when he is working, he gets to take overseas trips to hang with clients because he is the only one who can get the product working on a new server. Companies must be careful which behaviors they reward, and Conway's law applies to deployment processes.

What I learned by being forced to do deployments manually

To an extent, a process reflecting hierarchy and division of responsibility is normal and necessary, which is why Octopus Deploy has built-in manual intervention and approval steps. But some of the motivations to stick with manual deployments are nonsense. Complex manual deployments are still more widespread than they need to be, which makes me feel bad for the developers who still live like I did back in the 2000s — if you call that living. I guess there is an argument for the team-building experiences in those 2 a.m. deployments, much like deployments in the military sense of the word may teach the troops some valuable life lessons, even if the purported reason for the battle isn't the real reason, and the costs turn out to be higher than anyone expects.

It reminds me of a tour I had the good fortune to take in 2023 of the Adobe San Jose offices, in which a "Photoshop floor" includes time-capsule conference rooms representing different periods in Photoshop's history, including a '90s room with a working Macintosh Classic running Photoshop 1.0. The past is an interesting and instructive place to visit, but not somewhere you'd want to live in 2025. Even so, my experience of Flintstones-style deployments gave me an appreciation for the ways a tool like Octopus Deploy automates everything I was forced to do manually in the past, which kept my motivation up when I was working through the teething problems once I was tasked with transforming a manual deployment process into an automated one. This appreciation for the value proposition of a tool like Octopus Deploy was why I later jumped at the opportunity to work for Octopus in 2021.

What I learned working for Octopus Deploy

The first thing I noticed was how friendly the devs were and the smoothness of the onboarding process, with only one small manual change needed to make the code run correctly in Docker on my dev box. The second thing I noticed was that this wasn't heaven, and there were flaky integration tests, slow builds, and Cake build output that hid the informative build errors. In fairness, at the time Octopus was in a period of learning how to upscale. There was a whole project, which I eventually joined, to performance-tune the integration tests and Octopus itself. As an Octopus user, the product had seemed as close to magic as we were likely to find, compared to the hell we had to go through without a proper deployment tool. Yet there's something heartening about knowing nobody has a flawless codebase, and even Octopus Deploy has some smelly code they have to deal with and suboptimal deployments of some stuff.
Once I made my peace with the fact that there's no silver bullet that magically and perfectly solves any aspect of software, including deployments, my hot take is that deploying like an idiot comes down to a mismatch between the tools you use to deploy and the reward in complexity reduced versus complexity added. Therefore, one example of deploying like an idiot is the story I opened with, in which team members routinely drove to the office at 2 a.m. to manually deploy a complicated website involving database changes, background processes, web farms, and SLAs. But another example of deploying like an idiot might be a solo developer with a side project who sets up Azure DevOps to push to Octopus Deploy and pays more than necessary in money and cognitive load. Indeed, Octopus is a deceptively complex tool that can automate anything, not only deployments, but the complexity comes at the price of a learning curve and the risk of decision fatigue. For instance, when I used my "sharpening time" (the Octopus term for side-project time) to explore ways to deploy a JavaScript library, I found at least two different ways to do it in Octopus, depending on whether it's acceptable to automate upgrading all your consumers to the latest version of your library or whether you need more control of versioning per consumer. Sidenote: the Angry Birds Octopus parody that Octopus marketing created to go with my "consumers of your JavaScript library as tenants" article was a highlight of my time at Octopus — I wish we could have made it playable like a Google Doodle.

Nowadays I see automation as a spectrum of how automatic and sophisticated you need things to be, somewhat separate from the choice of tools. The challenge is locating the sweet spot where automation makes your life easier, weighed against the cost of licensing fees and the time and energy you need to devote to working on the deployment process. Octopus Deploy might be at one end of the spectrum of automated deployments, for when you need lots of control over a complicated automatic process. On the other end of the spectrum, the guy who runs Can I Use found that adopting git-ftp was a life upgrade from manually copying the modified files to his web server, while keeping his process simple and not spending a lot of energy on more sophisticated deployment systems. Somewhere in the middle reside things like Bitbucket Pipelines or GitHub Actions, which are more automated and sophisticated than just git-ftp from your dev machine, but less complicated than Octopus together with TeamCity, which could be overkill on a simple project.

The complexity of deployment might be something to consider when defining your architecture, similar to how planning poker can trigger a business to rethink the value of certain features once it obtains holistic feedback from the team on the overall cost. For instance, you might assume you need a database, but when you factor in the complexity it adds to roll-outs, you may be motivated to rethink whether your use case truly needs one. What about serverless? Does serverless solve our problems, given it's supposed to eliminate the need to worry about how the server works?

Reminder: Serverless isn't serverless

It should be uncontroversial to say that "serverless" is a misnomer, but how much this inaccuracy matters is debatable.
I'll give this analogy for why I think the name "serverless" is a problem: early cars had a right to call themselves "horseless carriages" because they were a paradigm shift that meant your carriage could move without a horse. "Driverless cars" shouldn't be called that, because they don't remove the need for a driver; it's just that the driver is an AI. "Self-driving car" is therefore a better name. Self-driving cars often work well, but completely ignoring the limitations of how they work can be fatal. When you unpack the term "serverless," it's like a purportedly horseless carriage still pulled by a horse — but the driver claims his feeding and handling of the horse will be managed so well, and the carriage so insulated from neighing and horse flatulence, that passengers will feel as if the horse doesn't exist. My counterargument is that the reality of the horse is bound to affect the passenger experience sooner or later.

For example, one of my hobby projects was a rap battle chat deployed to Firebase. I needed the Firebase cloud function to calculate the score for each post using the same rhyme detection algorithm I used to power the front end. This worked fine in testing when I ran the Firebase function using the Cloud Functions emulator — but it performed unacceptably after my first deployment due to a cold start (loading the pronunciation dictionary was the likely culprit, if you're wondering). Much like my experiences in the 2000s, my code behaved dramatically differently on my dev machine than on the real Firebase, almost as though there is still a server I can't pretend doesn't exist — but now I had limited ability to tweak it. One way to fix it was to throw money at the problem.

That serverless experience reminds me of a scene in the science fiction novel Rainbows End in which the curmudgeonly main character cuts open a car that isn't designed to be serviced, only to find that all the components inside are labeled "No user-serviceable parts within." He's assured that even if he could cut open those parts, the car is "Russian dolls all the way down." One of the other characters asks him: "Who'd want to change them once they're made? Just trash 'em if they're not working like you want."

I don't want to seem like a curmudgeon — but my point is that while something like Firebase offers many conveniences and can simplify deployment and configuration, it can also move the problem to knowing which services are appropriate to pay extra for. And you may find your options are limited when things go wrong with a deployment or any other part of web development.

Deploying this article

Since I love self-referential twist endings, I'll point out that even publishing an article like this has a variety of possible "deployment processes." For instance, Octopus uses Jekyll for their blog. You make a branch with the markdown of your proposed blog post, and then marketing proposes changes before setting a publication date and merging. The relevant automated process handles publication from there. This process has the advantage of using familiar tools for collaborating on changes to a file — but it might not feel approachable to teams not comfortable with Git, and it also might not be immediately apparent how to preview the final article as it will appear on the website. On the other hand, when I create an article for CSS-Tricks, I use Dropbox Paper to create my initial draft, then send it to Geoff Graham, who makes edits, for which I get notifications.
Once we have confirmed via email that we're happy with the article, he manually ports it to Markdown in WordPress, then sends me a link to a pre-live version on the site to check before the article is scheduled for publication. It's a manual process, so I sometimes find problems even in this "release" of static content collaborated on by only two people — but you gotta weigh how much risk there is of mistakes against how much value there would be in fully automating the process. With anything you have to publish on the web, keep searching for that sweet spot of elegance, risk, and the reward-to-effort ratio.

Feeling Like I Have No Release: A Journey Towards Sane Deployments originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
9. by: Team LHB Mon, 07 Apr 2025 17:16:55 +0530

After years of training DevOps students and taking interviews for various positions, I have compiled this list of Docker interview questions (with answers) that are generally asked in the technical round. I have categorized them into various levels:

- Entry level (very basic Docker questions)
- Mid-level (slightly deeper into Docker)
- Senior level (advanced Docker knowledge)
- Common for all (generic Docker stuff for all)
- Practice Dockerfile examples with an optimization challenge (you should love this)

If you are absolutely new to Docker, I highly recommend our Docker course for beginners: Learn Docker: Complete Beginner's Course (Linux Handbook, Abdullah Tarek). Learn all the essentials of Docker in this series.

Let's go.

Entry-level Docker questions

What is Docker?
Docker is a containerization platform that allows you to package an application and its dependencies into a container. Unlike virtualization, Docker containers share the host OS kernel, making them more lightweight and efficient.

What is containerization?
It's a way to package software in a format that can run isolated on a shared OS.

What are containers?
Containers are packages that contain an application along with everything it needs, such as libraries and dependencies.

What is a Docker image?
A Docker image is a read-only template that contains a set of instructions for creating a container that can run on the Docker platform. It provides a convenient way to package up applications and preconfigured server environments, which you can use for your own private use or share publicly with other Docker users.

What is Docker Compose?
It is a tool for defining and running multi-container Docker applications.

What's the difference between virtualization and containerization?
Virtualization abstracts the entire machine with separate VMs, while containerization abstracts the application with lightweight containers sharing the host OS.

Describe a Docker container's lifecycle.
Create | Run | Pause | Unpause | Start | Stop | Restart | Kill | Destroy

What is a volume in Docker, and which command do you use to create it?
A volume in Docker is a persistent storage mechanism that allows data to be stored and accessed independently of the container's lifecycle. Volumes enable you to share data between containers or persist data even after a container is stopped or removed. Create one with docker volume create <volume_name>. For example: docker run -v data_volume:/var/lib/mysql mysql

What is Docker Swarm?
Docker Swarm is a tool for clustering and managing containers across multiple hosts.

How do you remove unused data in Docker?
Use docker system prune to remove unused data, including stopped containers, unused networks, and dangling images.

Mid-level Docker questions

What command retrieves detailed information about a Docker container?
Use docker inspect <container_id> to get detailed JSON information about a specific Docker container.

How do the Docker Daemon and Docker Client interact?
The Docker Client communicates with the Docker Daemon through a REST API over a Unix socket or TCP/IP.

How can you set CPU and memory limits for a Docker container?
Use docker run --memory="512m" --cpus="1.5" <image> to set memory and CPU limits.

Can a Docker container be configured to restart automatically?
Yes, a Docker container can be configured to restart automatically using restart policies such as --restart always or --restart unless-stopped. (A short sketch follows.)
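To make the restart-policy answer concrete, here is a small sketch of my own (the container name "web" is a placeholder):

```bash
# Start a container that restarts unless explicitly stopped
docker run -d --name web --restart unless-stopped nginx

# Change the policy on an existing container
docker update --restart always web

# Verify the configured policy
docker inspect -f '{{ .HostConfig.RestartPolicy.Name }}' web
```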
What methods can you use to debug issues in a Docker container?

- Inspect logs with docker logs <container_id> to view output and error messages.
- Execute commands interactively using docker exec -it <container_id> /bin/bash to access the container's shell.
- Check container status and configuration with docker inspect <container_id>.
- Monitor resource usage with docker stats to view real-time performance metrics.
- Use Docker's built-in debugging tools and third-party monitoring solutions for deeper analysis.

What is the purpose of Docker Secrets?
Docker Secrets securely manages sensitive data, like passwords, for Docker services. Use docker secret create <secret_name> <file> to add secrets.

What are the different types of networks in Docker, and how do they differ?
Docker provides several types of networks to manage how containers communicate with each other and with external systems. The main types are bridge, none, host, overlay, macvlan, and ipvlan.

bridge: This is the default network mode. Each container connected to a bridge network gets its own IP address and can communicate with other containers on the same bridge network using that IP. Useful for scenarios where you want isolated containers to communicate through a shared internal network.
docker run ubuntu

none: Containers attached to the none network are not connected to any network. They don't have any network interfaces except the loopback interface (lo). Useful when you want to create a container with no external network access, for security reasons.
docker run --network=none ubuntu

host: The container shares the network stack of the Docker host, which means it has direct access to the host's network interfaces. There's no isolation between the container and the host network. Useful when you need the highest possible network performance, or when you need the container to use a service on the host system.
docker run --network=host ubuntu

overlay: Overlay networks connect multiple Docker daemons together, enabling swarm services to communicate with each other. They're used in Docker Swarm mode for multi-host networking. Useful for distributed applications that span multiple hosts in a Docker Swarm.
docker network create -d overlay my_overlay_network

macvlan: Macvlan networks allow you to assign a MAC address to a container, making it appear as a physical device on the network. The container can communicate directly with the physical network using its own IP address. Useful when you need containers to appear as physical devices on the network and need full control over the network configuration.
docker network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=eth0 my_macvlan_network

ipvlan: Similar to macvlan, but uses different methods to route packets. It's more lightweight and provides better performance by leveraging the Linux kernel's built-in network functionalities. Useful for scenarios where you need low-latency, high-throughput networking with minimal overhead.
docker network create -d ipvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=eth0 my_ipvlan_network

Explain the main components of Docker architecture.
Docker consists of the Docker Host, Docker Daemon, Docker Client, and Docker Registry. The Docker Host is the computer (or server) where Docker is installed and running. It's like the home for Docker containers, where they live and run. The Docker Daemon is a background service that manages Docker containers on the Docker Host.
It's like the manager of the Docker Host, responsible for creating, running, and monitoring containers based on the instructions it receives. The Docker Client communicates with the Docker Daemon, which manages containers. The Docker Registry stores and distributes Docker images.

How does a Docker container differ from an image?
A Docker image is a static, read-only blueprint, while a container is a running instance of that image. Containers are dynamic and can be modified or deleted without affecting the original image.

Explain the purpose of a Dockerfile.
A Dockerfile is a script containing instructions to build a Docker image. It specifies the base image, sets up the environment, installs dependencies, and defines how the application should run.

How do you link containers in Docker?
Docker provides network options to enable communication between containers. Docker Compose can also be used to define and manage multi-container applications.

How can you secure a Docker container?
Container security involves using official base images, minimizing the number of running processes, implementing least-privilege principles, regularly updating images, and utilizing Docker security scanning tools, e.g., Docker vulnerability scanning.

What's the difference between ARG and ENV?
ARG is for build-time variables, and its scope is limited to the build process. ENV is for environment variables, and its scope extends to both the build process and the running container.

What's the difference between RUN, ENTRYPOINT, and CMD?
RUN executes a command during the image build process, creating a new image layer. ENTRYPOINT defines a fixed command that always runs when the container starts (it can be overridden at runtime using --entrypoint). CMD specifies a default command or arguments that can be overridden at runtime. (See the example Dockerfile after this post.)

What's the difference between COPY and ADD?
If you are just copying local files, it's often better to use COPY for simplicity. Use ADD when you need additional features, like extracting compressed archives or pulling resources from URLs.

How do you drop the MAC_ADMIN capability when running a Docker container?
Use the --cap-drop flag with the docker run command: docker run --cap-drop MAC_ADMIN ubuntu

How do you add the NET_BIND_SERVICE capability when running a Docker container?
Use the --cap-add flag with the docker run command: docker run --cap-add NET_BIND_SERVICE ubuntu

How do you run a Docker container with all privileges enabled?
Use the --privileged flag with the docker run command: docker run --privileged ubuntu
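Several of the Dockerfile questions above (ARG vs. ENV, COPY vs. ADD, RUN vs. ENTRYPOINT vs. CMD) are easiest to grasp side by side, so here is a small illustrative Dockerfile. It's a generic sketch of mine, not from the original post; the Node app and its file names are placeholders:

```dockerfile
# ARG: available only while the image is being built
ARG NODE_VERSION=20
FROM node:${NODE_VERSION}-alpine

# ENV: available at build time AND inside the running container
ENV PORT=3000

WORKDIR /app

# COPY: plain file copying (prefer this over ADD for local files)
COPY package*.json ./

# RUN: executes at build time, creating a new image layer
RUN npm install --production

COPY . .

# ENTRYPOINT: the fixed executable (override with --entrypoint)
ENTRYPOINT ["node"]

# CMD: default arguments, easily overridden at `docker run` time
CMD ["server.js"]
```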
10. The Two Faces of Tomorrow

by James Patrick Hogan

Publisher: Baen Books
Published Date: 1997
Page Count: 464
Categories: Fiction / Science Fiction / General, Fiction / Science Fiction / Hard Science Fiction
Language: EN
Average Rating: 2.5 (based on 2 ratings)
Maturity Rating: No Mature Content Detected
ISBN: 0671878484

By the mid-21st century, technology had become much too complicated for humans to handle -- and the computer network that had grown up to keep civilization from tripping over its own shoelaces was also beginning to be overwhelmed. Something Had To Be Done. As a solution, Raymond Dyer's project developed the first genuinely self-aware artificial intelligence -- code name: Spartacus. But could Spartacus be trusted to obey its makers? And if it went rogue, could it be shut down? As an acid test, Spartacus was put in charge of a space station and programmed with a survival instinct. Dyer and his team had the job of seeing how far the computer would go to defend itself when they tried to pull the plug. Dyer didn't expect any serious problems to arise in the experiment. Unfortunately, he had built more initiative into Spartacus than he realized.... And a superintelligent computer with a high dose of initiative makes a dangerous guinea pig.

More Information
11. by: Abhishek Kumar Sat, 05 Apr 2025 06:40:23 GMT

There was a time when coding meant painstakingly writing every line, debugging cryptic errors at 3 AM, and pretending to understand regex. But in 2025? Coding has evolved, or rather, it has vibed into something entirely new. Enter Vibe Coding, a phenomenon where instead of manually structuring functions and loops, you simply tell AI what you want, and it does the hard work for you.

This approach has taken over modern software development. Tools like Cursor and Windsurf, AI-powered code editors built specifically for this new workflow, are helping developers create entire applications without in-depth coding knowledge. Gone are the days of memorizing syntax. Now, you can describe an app idea in plain English, and AI will generate, debug, and even refactor the code for you.

At first, it sounded too good to be true. But then people started launching SaaS businesses with nothing but Vibe Coding, using AI to write everything from landing pages to backend logic. We thought, since the future of coding is AI-assisted, you'll need the right tools to make the most of it. So, here's a handpicked list of the best code editors for vibe coding in 2025, designed to help you turn your wildest ideas into real projects, fast. 💨

🚧 NON-FOSS warning: Not all the editors mentioned in this article are open source. While some are, many of the AI-powered features provided by these tools rely on cloud services that often include a free tier but are not entirely free to use. AI compute isn't cheap! When local LLM support is available, I've made sure to mention it specifically. Always check the official documentation or pricing page before diving in.

1. Zed

If VS Code feels sluggish and Cursor is a bit too heavy on the vibes, then Zed might just be your new favorite playground. Written entirely in Rust, Zed is built for blazing-fast speed. It's designed to utilize multiple CPU cores and your GPU, making every scroll, search, and keystroke snappy as heck. And while it's still a relatively new player in the editor world, the Zed team is laser-focused on building the fastest, most seamless AI-native code editor out there.

You get full AI interaction built right into the editor, thanks to the Assistant Panel and inline assistants that let you refactor, generate, and edit code using natural language, without leaving your flow. Want to use Claude 3.5, a self-hosted LLM via Ollama, or something else? Zed's open API lets you plug in what works for you.

Key Features:
✅ Built entirely in Rust for extreme performance and low latency.
✅ Native AI support with inline edits, slash commands, and fast refactoring.
✅ Assistant Panel for controlling AI interactions and inspecting suggestions.
✅ Plug-and-play LLM support, including Ollama and Claude via API.
✅ Workflow Commands to automate complex tasks across multiple files.
✅ Custom Slash Commands with WebAssembly or JSON for tailored AI workflows.

Zed AI

2. Flexpilot IDE

Flexpilot IDE joins the growing league of open-source, AI-native code editors that prioritize developer control and privacy. Forked from VS Code, it's designed to be fully customizable, letting you bring your own API keys or run local LLMs (like via Ollama) for a more private and cost-effective AI experience. Much like Zed, it takes a developer-first approach: no locked-in services, no mysterious backend calls. Just a clean, modern editor that plays nice with whatever AI setup you prefer.
Key Features:
✅ AI-powered autocomplete with context-aware suggestions
✅ Simultaneously edit multiple files in real time with AI assistance
✅ Ask code-specific questions in a side panel for instant guidance
✅ Refactor, explain, or improve code directly in your files
✅ Get instant AI help with a keyboard shortcut, no interruptions
✅ Talk to your editor and get code suggestions instantly
✅ Run commands and debug with AI assistance inside your terminal
✅ Reference code elements and editor data precisely
✅ AI-powered renaming of variables, functions, and classes
✅ Generate commit messages and PR descriptions in a click
✅ Track token consumption across AI interactions
✅ Use any LLM: OpenAI, Claude, Mistral, or local Ollama
✅ Compatible with GitHub Copilot and other VS Code extensions

Flexpilot

3. VS Code with GitHub Copilot

While GitHub Copilot isn't a standalone code editor, it's deeply integrated into Visual Studio Code, which makes sense since Microsoft owns both GitHub and VS Code. As one of the most widely used AI coding assistants, Copilot provides real-time AI-powered code suggestions that adapt to your project's context. Whether you're writing Python scripts, JavaScript functions, or even Go routines, Copilot speeds up development by generating entire functions, automating repetitive tasks, and even debugging your code.

Key Features:
✅ AI-driven code suggestions in real time.
✅ Supports multiple languages, including Python, JavaScript, and Go.
✅ Seamless integration with VS Code, Neovim, and JetBrains IDEs.
✅ Free for students and open-source developers.

GitHub Copilot

4. Pear AI

Pear AI is a fork of VS Code, built with AI-first development in mind. It's kinda like Cursor or Windsurf, but with a twist: you can plug in your own AI server, run local models via Ollama (which is probably the easiest route), or just use theirs. It has autocomplete, context-aware chat, and a few other handy features.

Now, full transparency: it's still a bit rough around the edges. Not as polished, a bit slow at times, and the updates? Eh, not super frequent. The setup can feel a little over-engineered if you're just trying to get rolling. But… I see potential here. If the right devs get their hands on it, this could shape up into something big.

Key Features:
✅ VS Code-based editor with a clean UI and familiar feel
✅ "Knows your code" – context-aware chat that actually understands your project
✅ Works with remote APIs or local LLMs (Ollama integration is the easiest)
✅ Built-in AI code generation tools curated into a neat catalog
✅ Autocomplete and inline code suggestions, powered by your model of choice
✅ Ideal for devs experimenting with custom AI backends or local AI setups

Pear AI

5. Fleet by JetBrains

If you've ever written Java, Python, or even Kotlin, chances are you've used or at least heard of JetBrains IDEs like IntelliJ IDEA, PyCharm, or WebStorm. JetBrains has long been the gold standard for feature-rich developer environments. Now, they're stepping into the future of coding with Fleet, a modern, lightweight, AI-powered code editor designed to simplify your workflow while keeping JetBrains' signature intelligence baked in.

Fleet isn't trying to replace IntelliJ; it's carving a space of its own: minimal UI, fast startup, real-time collaboration, and enough built-in tools to support full-stack projects out of the box. And with JetBrains' new AI assistant baked in, you're getting contextual help, code generation, and terminal chat, all without leaving your editor.
Key Features:
✅ Designed for fast startup and low memory usage without sacrificing features
✅ Full-stack language support: Java, Kotlin, JavaScript, TypeScript, Python, Go, and more
✅ Real-time collaboration
✅ Integrated Git tools like a diff viewer, branch management, and seamless commits
✅ Use individual or shared terminals in collaborative sessions
✅ Auto-generate code, fix bugs, or chat with your terminal
✅ Docker & Kubernetes support: manage containers right inside your IDE
✅ Preview, format, and edit Markdown files with live previews
✅ Custom themes, keymaps, and future language/tech support via plugins

Fleet

6. Cursor

Cursor is a heavily modified fork of VSCode with deep AI integration. It supports multi-file editing, inline chat, and autocomplete for code, markdown, and even JSON. It's fast, responsive, and great for quickly shipping out tutorials or apps. You also get terminal autocompletion and contextual AI interactions right in your editor.

Key Features:
✅ Auto-imports and suggestions optimized for TypeScript and Python
✅ Generate entire app components or structures with a single command
✅ Context-gathering assistant that can interact with your terminal
✅ Drag & drop folders for AI-powered explanations and refactoring
✅ Process natural language commands inside the terminal
✅ AI detects issues in your code and suggests fixes
✅ Choose from GPT-4o, Claude 3.5 Sonnet, o1, and more

Cursor

7. Windsurf (Previously Codeium)

Windsurf takes things further with an agentic approach: it can autonomously run scripts, check outputs, and continue building based on the results until it fulfills your request. Though it's relatively new, Windsurf shows massive promise with smooth performance and smart automation packed into a familiar development interface. Built on (you guessed it) VS Code, Windsurf is crafted by Codeium and introduces features like Supercomplete and Cascade, focusing on deep workspace understanding and intelligent, real-time code generation.

Key Features:
✅ Supercomplete for context-aware, full-block code suggestions across your entire project
✅ Real-time chat assistant for debugging, refactoring, and coding help across languages
✅ Command Palette with custom commands
✅ Cascade feature for syncing project context and iterative problem-solving
✅ Flow tech for automatic workspace updates and intelligent context awareness
✅ Supports top-tier models like GPT-4o, Claude 3.5 Sonnet, LLaMA 3.1 70B & 405B

It's still new but shows a lot of promise with smooth performance and advanced automation capabilities baked right in.

Windsurf AI

Final thoughts

I've personally used GitHub Copilot's free tier quite a bit, and recently gave Zed AI a spin, and I totally get why the internet is buzzing with excitement. There's something oddly satisfying about typing a few lines of instruction and then just... letting your editor take over while you lean back. That said, I've also spent hours untangling some hilariously off-mark Copilot-generated bugs. So yeah, it's powerful, but far from perfect. If you're just stepping into the AI coding world, don't dive in blind. Take time to learn the basics, experiment with different editors and assistants, and figure out which one actually helps you ship code your way. And if you're already using an AI editor you swear by, let us know in the comments. Always curious to hear what other devs are using.
  12. Aws

Aws: The Ultimate Guide From Beginners To Advanced For The Amazon Web Services (2020 Edition) by Theo H. King

Publisher: Independently Published | Published Date: 2019-12-21 | Page Count: 197 | Categories: Computers / Internet / Web Services & APIs | Language: EN | Average Rating: N/A (based on N/A ratings) | Maturity Rating: No Mature Content Detected | ISBN: 1675528276

Become an Expert at Amazon Web Services and Transform Your Business! If cloud computing is one of the leading trends in the IT industry (and it most certainly is), then the Amazon Web Services platform (AWS) is the champion of that trend. If you want to be a part of the competitive markets, you need to jump on this ascending wagon and get familiar with AWS. There's a reason successful businesses like Netflix and Pinterest use this platform. The math is simple: higher performance, security, and reliability at a lower cost. This book offers a guide to AWS, for both beginner and advanced users. If you want to reduce your company's operating costs, and control the safety of your data, use this step-by-step guide for computing and networking in the AWS cloud. What you'll be able to do after reading this guide:

• Use the developer tools of AWS to your company's advantage
• Manage cloud architecture through AWS services
• Upgrade your outsourcing
• Create a private network in the cloud
• Implement AWS technology in your projects
• Create cloud storage and a virtual desktop environment
• Use Amazon WorkSpaces and the Amazon S3 service
• And so much more!

The best part about AWS is that it works at any scale. You can be the owner of a big or small business and still implement AWS in your operations. Even if you're familiar with the AWS cloud, this guide will help you expand your knowledge on the topic. You'll find out everything there is on AWS strategies, cloud selection, and how to make money with a smart AWS implementation in your company. You don't need to be an IT expert to use AWS. You simply need this comprehensive and easy-to-understand guide. Join millions of customers around the world and skyrocket your profits! Scroll up, click on 'Buy Now with 1-Click' and Get Your Copy!

More Information
  13. Numerical Methods for Engineers and Scientists Using MATLAB

Numerical Methods for Engineers and Scientists Using MATLAB by Ramin S. Esfandiari

Publisher: CRC Press | Published Date: 2017 | Page Count: 471 | Categories: Computers / General, Mathematics / General, Mathematics / Applied, Mathematics / Number Systems, Mathematics / Numerical Analysis, Technology & Engineering / Engineering (General), Technology & Engineering / Civil / General, Technology & Engineering / Mechanical | Language: EN | Average Rating: N/A (based on N/A ratings) | Maturity Rating: No Mature Content Detected | ISBN: 1498777422

This book provides a pragmatic, methodical and easy-to-follow presentation of numerical methods and their effective implementation using MATLAB, which is introduced at the outset. The author introduces techniques for solving equations of a single variable and systems of equations, followed by curve fitting and interpolation of data. The book also provides detailed coverage of numerical differentiation and integration, as well as numerical solutions of initial-value and boundary-value problems. The author then presents the numerical solution of the matrix eigenvalue problem, which entails approximation of a few or all eigenvalues of a matrix. The last chapter is devoted to numerical solutions of partial differential equations that arise in engineering and science. Each method is accompanied by at least one fully worked-out example showing essential details involved in preliminary hand calculations, as well as computations in MATLAB. This thoroughly-researched resource:

More Information
  14. Ecological Statistics

Ecological Statistics: Contemporary Theory and Application by Gordon A. Fox, Simoneta Negrete-Yankelevich, Vinicio J. Sosa

Publisher: Oxford University Press | Published Date: 2015 | Page Count: 389 | Categories: Computers / Mathematical & Statistical Software, Mathematics / Probability & Statistics / General, Nature / General, Science / Life Sciences / Botany, Science / Life Sciences / Ecology, Science / Life Sciences / Zoology / General | Language: EN | Average Rating: N/A (based on N/A ratings) | Maturity Rating: No Mature Content Detected | ISBN: 0199672555

The application and interpretation of statistics are central to ecological study and practice. Ecologists are now asking more sophisticated questions than in the past. These new questions, together with the continued growth of computing power and the availability of new software, have created a new generation of statistical techniques. These have resulted in major recent developments in both our understanding and practice of ecological statistics. This novel book synthesizes a number of these changes, addressing key approaches and issues that tend to be overlooked in other books such as missing/censored data, correlation structure of data, heterogeneous data, and complex causal relationships. These issues characterize a large proportion of ecological data, but most ecologists' training in traditional statistics simply does not provide them with adequate preparation to handle the associated challenges. Uniquely, Ecological Statistics highlights the underlying links among many statistical approaches that attempt to tackle these issues. In particular, it gives readers an introduction to approaches to inference, likelihoods, generalized linear (mixed) models, spatially or phylogenetically-structured data, and data synthesis, with a strong emphasis on conceptual understanding and subsequent application to data analysis. Written by a team of practicing ecologists, mathematical explanations have been kept to the minimum necessary. This user-friendly textbook will be suitable for graduate students, researchers, and practitioners in the fields of ecology, evolution, environmental studies, and computational biology who are interested in updating their statistical tool kits. A companion web site provides example data sets and commented code in the R language.

More Information
  15. Design Patterns

Design Patterns: Elements of Reusable Object-oriented Software by Erich Gamma, Richard Helm, Ralph E. Johnson, John Vlissides

Publisher: Addison-Wesley | Published Date: 1995 | Page Count: 395 | Categories: Computers / Programming / Object Oriented | Language: EN | Average Rating: N/A (based on N/A ratings) | Maturity Rating: No Mature Content Detected | ISBN: 9332555400

Four software designers present a catalog of simple and succinct solutions to commonly occurring design problems, using Smalltalk and C++ in example code. These 23 patterns allow designers to create more flexible, elegant, and ultimately reusable designs without having to rediscover the design solutions themselves. The authors begin by describing what patterns are and how they can help you design object-oriented software. They go on to systematically name, explain, evaluate, and catalog recurring designs in object-oriented systems. --From publisher description.

More Information
  16. The Model Thinker

The Model Thinker: What You Need to Know to Make Data Work for You by Scott E. Page

Publisher: Basic Books | Published Date: 2018-11-27 | Page Count: 448 | Categories: Computers / Data Science / Data Modeling & Design, Mathematics / Probability & Statistics / General, Social Science / Statistics, Business & Economics / Economics / Theory, Computers / Data Science / General | Language: EN | Average Rating: N/A (based on N/A ratings) | Maturity Rating: No Mature Content Detected | ISBN: 0465094635

Work with data like a pro using this guide that breaks down how to organize, apply, and most importantly, understand what you are analyzing in order to become a true data ninja. From the stock market to genomics laboratories, census figures to marketing email blasts, we are awash with data. But as anyone who has ever opened up a spreadsheet packed with seemingly infinite lines of data knows, numbers aren't enough: we need to know how to make those numbers talk. In The Model Thinker, social scientist Scott E. Page shows us the mathematical, statistical, and computational models—from linear regression to random walks and far beyond—that can turn anyone into a genius. At the core of the book is Page's "many-model paradigm," which shows the reader how to apply multiple models to organize the data, leading to wiser choices, more accurate predictions, and more robust designs. The Model Thinker provides a toolkit for business people, students, scientists, pollsters, and bloggers to make them better, clearer thinkers, able to leverage data and information to their advantage.

More Information
  17. Foundations of Applied Mathematics, Volume 2

Foundations of Applied Mathematics, Volume 2: Algorithms, Approximation, Optimization by Jeffrey Humpherys, Tyler J. Jarvis

Publisher: SIAM | Published Date: 2020-03-10 | Page Count: 806 | Categories: Mathematics / Numerical Analysis, Mathematics / Optimization, Mathematics / Probability & Statistics / General, Computers / Computer Science, Computers / Programming / Algorithms | Language: EN | Average Rating: N/A (based on N/A ratings) | Maturity Rating: No Mature Content Detected | ISBN: 1611976065

In this second book of what will be a four-volume series, the authors present, in a mathematically rigorous way, the essential foundations of both the theory and practice of algorithms, approximation, and optimization—essential topics in modern applied and computational mathematics. This material is the introductory framework upon which algorithm analysis, optimization, probability, statistics, machine learning, and control theory are built. This text gives a unified treatment of several topics that do not usually appear together: the theory and analysis of algorithms for mathematicians and data science students; probability and its applications; the theory and applications of approximation, including Fourier series, wavelets, and polynomial approximation; and the theory and practice of optimization, including dynamic optimization. When used in concert with the free supplemental lab materials, Foundations of Applied Mathematics, Volume 2: Algorithms, Approximation, Optimization teaches not only the theory but also the computational practice of modern mathematical methods. Exercises and examples build upon each other in a way that continually reinforces previous ideas, allowing students to retain learned concepts while achieving a greater depth. The mathematically rigorous lab content guides students to technical proficiency and answers the age-old question "When am I going to use this?" This textbook is geared toward advanced undergraduate and beginning graduate students in mathematics, data science, and machine learning.

More Information
  18. by: Juan Diego Rodríguez Fri, 04 Apr 2025 13:05:22 +0000 The beauty of research is finding yourself on a completely unrelated topic mere minutes after opening your browser. It happened to me while writing an Almanac entry on @namespace, an at-rule that we probably won't ever use and is often regarded as a legacy piece of CSS. Maybe that's why there wasn't a lot of info about it until I found a 2010s post on @namespace by Divya Manian. The post was incredibly enlightening, but that's beside the point; what's important is that in Divya's blog, there were arrows on the sides to read the previous and next posts: Don't ask me why, but without noticing, I somehow clicked the left arrow twice, which led me to a post on "Notes from HTML5 Readiness Hacking." What's HTML5 Readiness?!

HTML5 Readiness was a site created by Paul Irish and Divya Manian that showed the browser support for several web features through the lens of a rainbow of colors. The features were considered (at the time) state-of-the-art or bleeding-edge stuff, such as media queries, transitions, video and audio tags, etc. As each browser supported a feature, a section of the rainbow would be added. I think it ran from 2010 to 2013, although it showed browser support data from 2008. I can't describe how nostalgic it made me feel; it reminded me of simpler times when even SVGs weren't fully supported. What almost made me shed a tear was thinking that, if this tool were updated today, all of the features would be colored in a full rainbow.

A new web readiness

It got me thinking: there are so many new features coming to CSS (many that haven't shipped to any browser) that there could be a new HTML5 Readiness with all of them. That's why I set out to do exactly that last weekend, a Web Readiness 2025 that holds each of the features coming to HTML and CSS I am most excited about. You can visit it at webreadiness.com! Right now, it looks kinda empty, but as time goes on we will hopefully see how the rainbow grows: Even though it was a weekend project, I took the opportunity to dip my toes into a couple of things I wanted to learn. Below are also some snippets I think are worth sharing.

The data is sourced from Baseline

My first thought was to mod the <baseline-status> web component made by the Chrome team because I have been wanting to use it since it came out. In short, it lets you embed the support data for a web feature directly into your blog. Not long ago, in fact, Geoff added it as a WordPress block in CSS-Tricks, which has been super useful while writing the Almanac. However, I immediately realized that using the <baseline-status> would be needlessly laborious, so I instead pulled the data from the Web Features API — https://api.webstatus.dev/v1/features/ — and displayed it myself. You can find all the available features in the GitHub repo.

Each ray is a web component

Another feature I have been wanting to learn more about was Web Components, and since Geoff recently published his notes on Scott Jehl's course Web Components Demystified, I thought it was the perfect chance. In this case, each ray would be a web component with a simple life cycle:
• Get instantiated.
• Read the feature ID from a data-feature attribute.
• Fetch its data from the Web Features API.
• Display its support as a list.
Said and done!
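Each step of that life cycle maps onto the component sketched next; only the fetch step is elided there, and a minimal version of it might look something like this (the endpoint comes from the post, but the example feature ID and what you do with the response are my assumptions):

// Hypothetical sketch of the fetch step against the Web Features API.
const ENDPOINT = "https://api.webstatus.dev/v1/features/";

async function fetchFeature(featureID) {
  const response = await fetch(`${ENDPOINT}${featureID}`);
  if (!response.ok) {
    throw new Error(`Web Features API request failed: ${response.status}`);
  }
  return response.json(); // the parsed support record for the feature
}

// Usage: inspect whatever the API returns for a given feature ID.
fetchFeature("anchor-positioning").then((data) => console.log(data));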
The simplified version of that code looks something like the following:

class BaselineRay extends HTMLElement {
  constructor() {
    super();
  }

  static get observedAttributes() {
    return ["data-feature"];
  }

  attributeChangedCallback(property, oldValue, newValue) {
    if (oldValue !== newValue) {
      this[property] = newValue;
    }
  }

  async #fetchFeature(endpoint, featureID) {
    // Fetch Feature Function
  }

  async connectedCallback() {
    // Call fetchFeature and Output List
  }
}

customElements.define("baseline-ray", BaselineRay);

Animations with the Web Animation API

I must admit, I am not too design-savvy (I hope it isn't that obvious), so what I lacked in design, I made up for with some animations. When the page initially loads, a welcome animation is easily achieved with a couple of timed keyframes. However, the animation between the rainbow and list layouts is a little more involved since it depends on the user's input, so we have to trigger it with JavaScript. At first, I thought it would be easier to do with Same-Document View Transitions, but I found myself battling the browser's default transitions and the lack of good documentation beyond Chrome's posts. That's why I decided on the Web Animation API, which lets you trigger keyframe-based animations directly from JavaScript.

sibling-index() and sibling-count()

A while ago, I wrote about the sibling-index() and sibling-count() functions. As their names imply, they return the current index of an element among its siblings and the total number of siblings, respectively. While Chrome announced its intent to ship both functions, I know it will be a while until they reach baseline support, but I still needed them to rotate and move each ray. In that same post, I talked about three options to polyfill each function. The first two were CSS-only, but this time I took the simplest JavaScript route, which observes the number of rays and adds custom properties with each ray's index and the total count. Sure, it's a bit overkill since the number of rays doesn't change, but it's pretty easy to implement:

const rays = document.querySelector(".rays");

const updateCustomProperties = () => {
  let index = 0;
  for (let element of rays.children) {
    element.style.setProperty("--sibling-index", index);
    index++;
  }
  rays.style.setProperty("--sibling-count", rays.children.length - 1);
};

updateCustomProperties();

const observer = new MutationObserver(updateCustomProperties);
const config = { attributes: false, childList: true, subtree: false };
observer.observe(rays, config);

With this, I could position each ray in a 180-degree range:

baseline-ray ul {
  --position: calc(180 / var(--sibling-count) * var(--sibling-index) - 90);
  --rotation: calc(var(--position) * 1deg);
  transform: translateX(-50%) rotate(var(--rotation)) translateY(var(--ray-separation));
  transform-origin: bottom center;
}

The selection is JavaScript-less

In the browser captions, if you hover over a specific browser, that browser's color will pop out more in the rainbow while the rest become a little transparent. Since in my HTML the caption element isn't anywhere near the rainbow (as a parent or a sibling), I thought I would need JavaScript for the task, but then I remembered I could simply use the :has() selector. It works by detecting whenever the closest common parent of both elements (it could be <section>, <main>, or the whole <body>) has a .caption item with a :hover pseudo-class. Once detected, we increase the size of each ray section of the same browser, while decreasing the opacity of the rest of the ray sections.
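Circling back to the Web Animation API step for a moment, here is a rough sketch of the kind of call involved; the selector, keyframes, timing, and the ".layout-toggle" hook are illustrative assumptions rather than the site's actual code:

// Hedged sketch: transition a ray from its rainbow rotation to a flat
// list layout with element.animate().
const ray = document.querySelector("baseline-ray ul");

function animateToList() {
  ray.animate(
    [
      { transform: "rotate(var(--rotation)) translateY(var(--ray-separation))" },
      { transform: "rotate(0deg) translateY(0)" },
    ],
    { duration: 400, easing: "ease-in-out", fill: "forwards" }
  );
}

// ".layout-toggle" is a made-up control name for this sketch.
document.querySelector(".layout-toggle")?.addEventListener("click", animateToList);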
CodePen Embed Fallback What's next?! What's left now is to wait! I hope people visit the page from time to time and watch how the rainbow grows. Like the original HTML5 Readiness page, I also want to take a snapshot at the end of each year until every feature is fully supported. Hopefully, that won't take long, especially given browsers' efforts to ship things faster and improve interoperability. Also, let me know if you think a feature is missing! I tried my best to pick exciting features without baseline support. View the report A New "Web" Readiness Report originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  19. by: Abhishek Prakash Thu, 03 Apr 2025 04:28:54 GMT Linux distributions agreeing to a single universal packaging system? That sounds like a joke, right? That's because it is. It's been a tradition of sorts to prank readers on the 1st of April with a humorous article. Since we are already past the 1st of April in all time zones, let me share this year's April Fools article with you. I hope you find it as amusing as I did while writing it 😄 No Snap or FlatPak! Linux Distros Agreed to Have Only One Universal Packaging: is this the end of fragmentation for Linux? (It's FOSS News, Abhishek)

💬 Let's see what else you get in this edition:
• Vivaldi offering a free built-in VPN.
• Tools to enhance the AppImage experience.
• Serpent OS going through a rebranding.
• And other Linux news, tips, and, of course, memes!

This edition of FOSS Weekly is supported by Typesense.

❇️ Typesense: Open Source Search Engine
Typesense is the free, open-source search engine for forward-looking devs. Make it easy on people: Tpyos? Typesense knows we mean typos, and they happen. With ML-powered typo tolerance and semantic search, Typesense helps your customers find what they're looking for—fast. Check them out on GitHub: typesense/typesense, the open source alternative to Algolia + Pinecone and an easier-to-use alternative to ElasticSearch ⚡ 🔍 ✨ a fast, typo-tolerant, in-memory fuzzy search engine for building delightful search experiences.

📰 Linux and Open Source News
• Vivaldi has teamed up with Proton VPN to provide an in-browser VPN.
• Serpent OS is now called AerynOS, and the first release is already here.
• GoboLinux has had a change in leadership, with a new release coming after a five-year gap.
• Proton now offers more features in its Drive and Docs app.

🧠 What We're Thinking About
Thank goodness Linux saves us from this 🤷 New Windows 11 build makes mandatory Microsoft Account sign-in even more mandatory. "Bypassnro" is an easy MS Account workaround for Home and Pro Windows editions. (Ars Technica, Andrew Cunningham)

🧮 Linux Tips, Tutorials and More
• Move away from Google Photos and self-host a privacy-focused solution instead.
• Window managers on Linux allow you to organize your windows and make use of screen space efficiently.
• Fed up with Netflix streaming in SD quality? You can make it play Full-HD content on Firefox by using a neat trick.
• Love AppImage? These tools will help you improve your AppImage experience: 5 Tools to Enhance Your AppImage Experience on Linux (It's FOSS, Sreenath)

👷 Homelab and Maker's Corner
Don't lose knowledge! Self-host your own Wikipedia or Arch Wiki: Taking Knowledge in My Own Hands By Self Hosting Wikipedia and Arch Wiki. Doomsday or not, knowledge should be preserved. (It's FOSS, Abhishek Kumar)

✨ Apps Highlight
Find yourself often forgetting things? Then you might need a reminder app like Tasks.org. Ditch Proprietary Reminder Apps, Try Tasks.org Instead: stay organized with Tasks.org, an open source to-do and reminders app that doesn't sell your data. (It's FOSS News, Sourav Rudra)

📽️ Videos I am Creating for You
I tested COSMIC alpha on Fedora 42 beta in the latest video. And I have taken some of the feedback to improve the audio quality in this one.
Subscribe to It's FOSS YouTube Channel

🧩 Quiz Time
Can you solve this riddle? Riddler's Back: Open-Source App Quiz (guess the open-source applications from the riddles). (It's FOSS, Ankush Das) After you are done with that, you can try your hand at matching Linux apps with their roles.

💡 Quick Handy Tip
In KDE Plasma, you can edit copied text in the clipboard. First, open the clipboard history with its shortcut, Meta+V (not Ctrl+V, which pastes). Now, click on the Edit button, which looks like a pencil. Then, edit the contents and click Save to store the result as a new clipboard item.

🤣 Meme of the Week
Such a nice vanity plate. 😮

🗓️ Tech Trivia
On March 31, 1939, Harvard and IBM signed an agreement to build the Mark I, also known as the IBM Automatic Sequence Controlled Calculator (ASCC). This pioneering electromechanical computer, conceived by Howard Aiken, interpreted instructions from paper tape and data from punch cards, playing a significant role in World War II calculations.

🧑‍🤝‍🧑 FOSSverse Corner
FOSSers are discussing which is the most underrated Linux distribution out there. Care to share your views? What is the most underrated Linux distribution? There are some distros like Debian, Ubuntu and Mint that are commonly used and everyone knows how good they are, but there are others that are used only by a few people and perform equally well. Would you like to nominate your choice for the most underrated Linux distro? I will nominate Void Linux… it is No. 93 on DistroWatch and performs for me as well as MX Linux or Debian. (It's FOSS Community, nevj)

❤️ With love
Share it with your Linux-using friends and encourage them to subscribe (hint: it's here). Share the articles in Linux subreddits and community forums. Follow us on Google News and stay updated in your News feed. Opt for It's FOSS Plus membership and support us 🙏 Enjoy FOSS 😄
  20. Earlier
  21. Blogger posted a blog entry in Programmer's Corner
    by: Geoff Graham Wed, 02 Apr 2025 12:37:19 +0000 I was chatting with Andy Clarke the other day about a new article he wants to write about SVG animations. "I've read some things that said that SMIL might be a dead end," he said. "Whaddya think?" That was my impression, too. Sarah Drasner summed up the situation nicely way back in 2017: Chrome was also in on the party and published an intent to deprecate SMIL, citing work in other browsers to support SVG animations in CSS. MDN linked to that same thread in its SMIL documentation when it published a deprecation warning. Well, Chrome never deprecated SMIL. At least according to this reply in the thread dated 2023. And since then, we've also seen Microsoft's Edge adopt a Chromium engine, effectively making it a Chrome clone. Also, last I checked, Caniuse reports full support in WebKit browsers. This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome 5, Firefox 4, IE 11, Edge 79, Safari 6
Mobile / Tablet: Android Chrome 134, Android Firefox 136, Android 3, iOS Safari 6.0-6.1

Now, I'm not saying that SMIL is perfectly alive and well. It could still very well be in the doldrums, especially when there are robust alternatives in CSS and JavaScript. But it's also not dead in the water. SMIL on? originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
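For anyone who hasn't touched it, SMIL is the declarative animation syntax built into SVG itself: <animate> and friends, no CSS or JavaScript required. A tiny self-contained example (mine, not from the article):

<svg viewBox="0 0 100 40" width="200" height="80">
  <circle cx="10" cy="20" r="8" fill="currentColor">
    <!-- SMIL: slide the circle back and forth, forever -->
    <animate attributeName="cx" values="10;90;10" dur="3s" repeatCount="indefinite" />
  </circle>
</svg>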
  22. by: Sreenath Wed, 02 Apr 2025 10:50:07 GMT The portable AppImage format is quite popular among developers and users alike. It allows you to run applications without installation or dependency issues, on virtually any Linux distribution. However, managing multiple AppImages or keeping them updated can sometimes be a bit cumbersome. Fortunately, there are third-party tools that simplify the process, making it easier to organize, update, and integrate AppImages into your Linux system. In this article, I'll share some useful tools that can help you manage AppImages more effectively and enhance your overall experience.

Gear Lever

Gear Lever is a modern GTK-based application that lets you manage your local AppImage files. It primarily helps you organize AppImages by adding desktop entries, updating applications, and more.

Features of Gear Lever:
• Drag and drop files directly from your file manager
• Update apps in place
• Keep multiple versions installed

Install Gear Lever: Gear Lever is available as a Flatpak package. You can install it with the following command:

flatpak install flathub it.mijorus.gearlever

AppImage Launcher

📋 While the last release of AppImage Launcher was a few years ago, it still works fine.

If you're a frequent user of AppImage packages, you should definitely check out AppImage Launcher. This open-source tool helps integrate AppImages into your system. It allows users to quickly add AppImages to the application menu, manage updates, and remove them with just a few clicks.

Features of AppImage Launcher:
• Adds desktop integration to AppImage files
• Includes a helper tool to manage AppImage updates
• Allows easy removal of AppImages
• Provides CLI tools for terminal-based operations and automation

Install AppImage Launcher: For Ubuntu users, the .deb file is available under the Continuous build section on the releases page.

AppImage Package Manager and AppMan

AppImage Package Manager (AM) is designed to simplify AppImage management, functioning similarly to how APT or DNF handle native packages. It supports not just AppImages, but other portable formats as well. AM relies on a large database of shell scripts, inspired by the Arch User Repository (AUR), to manage AppImages from various sources. A similar tool is AppMan. It is basically AM but manages all your apps locally without needing root access. If you are a casual user, you can use AppMan instead of AM so that everything stays local, with no need for sudo privileges.

Features of AppImage Package Manager:
• Supports AppImages and standalone archives (e.g., Firefox, Blender)
• Includes a comprehensive shell script database for official and community-sourced AppImages
• Create and restore snapshots
• Drag-and-drop AppImage integration
• Convert legacy AppImage formats

Install AppImage Package Manager: To install, run the following command:

wget -q https://raw.githubusercontent.com/ivan-hc/AM/main/AM-INSTALLER && chmod a+x ./AM-INSTALLER && ./AM-INSTALLER

The installer will prompt you to choose between AM and AppMan. Choose AppMan if you prefer local, privilege-free management.

AppImagePool

AppImagePool is a Flutter-based client for AppImage Hub. It offers a clean interface to browse and download AppImages listed on AppImage Hub.
Features of AppImagePool:
• Categorized list of AppImages
• Download from GitHub directly, no extra server involved
• Integrate and disintegrate AppImages easily from your system
• Version history and multi-download support

Installing AppImagePool: Download the AppImage file from the official GitHub releases page. A Flatpak package is also available from Flathub. If your system has Flatpak support, use the command:

flatpak install flathub io.github.prateekmedia.appimagepool

Zap

📋 The last release of Zap was a few years ago, but it worked fine in my testing.

Zap is an AppImage package manager written in Go. It allows you to install, update, and integrate AppImage packages efficiently.

Features of Zap:
• Install packages from the AppImage catalog using registered names
• Select and install specific versions
• Use the Zap daemon for automatic update checks
• Install AppImages from GitHub releases

Install Zap: To install Zap locally, run:

curl https://raw.githubusercontent.com/srevinsaju/zap/main/install.sh | bash -s

For a system-wide installation, run:

curl https://raw.githubusercontent.com/srevinsaju/zap/main/install.sh | sudo bash -s

In the end...

Here are a few more resources that an AppImage lover might like:
• Bauh package manager: bauh is a graphical interface for managing various Linux package formats like AppImage, Deb, Flatpak, etc.
• XApp-Thumbnailers: a thumbnail generation tool for popular file managers.
• Awesome AppImage: lists several AppImage tools and resources.

AppImage is a fantastic way to use portable applications on Linux, but managing them manually can be tedious over time. Thankfully, the tools mentioned above make it easier to organize, update, and integrate AppImages into your workflow. From feature-rich GUI tools like Gear Lever and AppImagePool to CLI tools like AppMan and Zap, there's something here for every kind of user. Try out a few and see which one fits your style best.
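And if you just want the no-tooling baseline for reference, running a single AppImage by hand is only two commands (the path below is a placeholder):

# Make the downloaded AppImage executable, then launch it directly.
chmod +x ~/Downloads/MyApp.AppImage
~/Downloads/MyApp.AppImage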
  23. How Google Tests Software

How Google Tests Software by James A. Whittaker, Jason Arbon, Jeff Carollo

Publisher: Addison-Wesley Professional | Published Date: 2012 | Page Count: 281 | Categories: Computers / General, Computers / Software Development & Engineering / Quality Assurance & Testing, Computers / Internet / Search Engines | Language: EN | Average Rating: 5 (based on 1 rating) | Maturity Rating: No Mature Content Detected | ISBN: 0321803027

2012 Jolt Award finalist! Pioneering the Future of Software Test. Do you need to get it right, too? Then learn from Google. Legendary testing expert James Whittaker, until recently a Google testing leader, and two top Google experts reveal exactly how Google tests software, offering brand-new best practices you can use even if you're not quite Google's size... yet! Breakthrough Techniques You Can Actually Use: Discover 100% practical, amazingly scalable techniques for analyzing risk and planning tests... thinking like real users... implementing exploratory, black box, white box, and acceptance testing... getting usable feedback... tracking issues... choosing and creating tools... testing "Docs & Mocks," interfaces, classes, modules, libraries, binaries, services, and infrastructure... reviewing code and refactoring... using test hooks, presubmit scripts, queues, continuous builds, and more. With these techniques, you can transform testing from a bottleneck into an accelerator, and make your whole organization more productive!

More Information
  24. Snowflake Data Engineering

Snowflake Data Engineering by Maja Ferle

Publisher: Simon and Schuster | Published Date: 2025-01-28 | Page Count: 368 | Categories: Computers / Data Science / General, Computers / Data Science / Data Analytics, Computers / Artificial Intelligence / Expert Systems | Language: EN | Average Rating: N/A (based on N/A ratings) | Maturity Rating: No Mature Content Detected | ISBN: 1633436853

A practical introduction to data engineering on the powerful Snowflake cloud data platform. Data engineers create the pipelines that ingest raw data, transform it, and funnel it to the analysts and professionals who need it. The Snowflake cloud data platform provides a suite of productivity-focused tools and features that simplify building and maintaining data pipelines. In Snowflake Data Engineering, Snowflake Data Superhero Maja Ferle shows you how to get started.

In Snowflake Data Engineering you will learn how to:
• Ingest data into Snowflake from both cloud and local file systems
• Transform data using functions, stored procedures, and SQL
• Orchestrate data pipelines with streams and tasks, and monitor their execution
• Use Snowpark to run Python code in your pipelines
• Deploy Snowflake objects and code using continuous integration principles
• Optimize performance and costs when ingesting data into Snowflake

Snowflake Data Engineering reveals how Snowflake makes it easy to work with unstructured data, set up continuous ingestion with Snowpipe, and keep your data safe and secure with best-in-class data governance features. Along the way, you'll practice the most important data engineering tasks as you work through relevant hands-on examples. Throughout, author Maja Ferle shares design tips drawn from her years of experience to ensure your pipeline follows the best practices of software engineering, security, and data governance. Foreword by Joe Reis. Purchase of the print book includes a free eBook in PDF and ePub formats from Manning Publications.

About the technology: Pipelines that ingest and transform raw data are the lifeblood of business analytics, and data engineers rely on Snowflake to help them deliver those pipelines efficiently. Snowflake is a full-service cloud-based platform that handles everything from near-infinite storage to fast elastic compute services, with inbuilt AI/ML capabilities like vector search, text-to-SQL, code generation, and more. This book gives you what you need to create effective data pipelines on the Snowflake platform.

About the book: Snowflake Data Engineering guides you skill-by-skill through accomplishing on-the-job data engineering tasks using Snowflake.
You'll start by building your first simple pipeline and then expand it by adding increasingly powerful features, including data governance and security, adding CI/CD into your pipelines, and even augmenting data with generative AI. You'll be amazed how far you can go in just a few short chapters!

What's inside
• Ingest data from the cloud, APIs, or Snowflake Marketplace
• Orchestrate data pipelines with streams and tasks
• Optimize performance and cost

About the reader: For software developers and data analysts. Readers should know the basics of SQL and the cloud.

About the author: Maja Ferle is a Snowflake Subject Matter Expert and a Snowflake Data Superhero who holds the SnowPro Advanced Data Engineer and the SnowPro Advanced Data Analyst certifications.

Table of Contents
Part 1
1 Data engineering with Snowflake
2 Creating your first data pipeline
Part 2
3 Best practices for data staging
4 Transforming data
5 Continuous data ingestion
6 Executing code natively with Snowpark
7 Augmenting data with outputs from large language models
8 Optimizing query performance
9 Controlling costs
10 Data governance and access control
Part 3
11 Designing data pipelines
12 Ingesting data incrementally
13 Orchestrating data pipelines
14 Testing for data integrity and completeness
15 Data pipeline continuous integration

More Information
  25. Jessica Brown posted a post in a topic in Linux
    Guide to UNIX Using Linux

Guide to UNIX Using Linux by Michael J. Palmer

Publisher: W. Ross MacDonald School Resource Services Library | Published Date: 2011 | Page Count: N/A | Categories: Linux | Language: EN | Average Rating: N/A (based on N/A ratings) | Maturity Rating: No Mature Content Detected | ISBN: N/A

No description available.

More Information
  26. Netnography

Netnography: Doing Ethnographic Research Online by Robert V Kozinets

Publisher: SAGE Publications | Published Date: 2010 | Page Count: 221 | Categories: Social Science / Research, Social Science / Anthropology / Cultural, Computers / Internet / General, Social Science / Methodology | Language: EN | Average Rating: N/A (based on N/A ratings) | Maturity Rating: No Mature Content Detected | ISBN: 1848606451

This exciting new text is the first to explore the discipline of 'Netnography' – the conduct of ethnography over the internet – a method specifically designed to study cultures and communities online. For the first time, full procedural guidelines for the accurate and ethical conduct of ethnographic research online are set out, with detailed, step-by-step guidance to thoroughly introduce, explain, and illustrate the method to students and researchers. The author also surveys the latest research on online cultures and communities, focusing on the methods used to study them, with examples focusing on the new elements and contingencies of the blogosphere (blogging), microblogging, videocasting, podcasting, social networking sites, virtual worlds, and more.

More Information
  27. Jessica Brown posted a post in a topic in Computers
    Programming Pearls

Programming Pearls by Jon Louis Bentley

Publisher: Addison-Wesley | Published Date: 1986 | Page Count: 195 | Categories: Computers / Programming / General | Language: EN | Average Rating: N/A (based on N/A ratings) | Maturity Rating: No Mature Content Detected | ISBN: 0201103311

The essays in this book present programs that go beyond solid engineering techniques to be creative and clever solutions to computer problems. The programs are fun and teach important programming techniques and fundamental design principles.

More Information
  28. by: Bryan Robinson Tue, 01 Apr 2025 13:50:58 +0000 I'm a big fan of Astro's focus on developer experience (DX) and the onboarding of new developers. While the basic DX is strong, I can easily make a convoluted system that is hard to onboard my own developers to. I don't want that to happen. If I have multiple developers working on a project, I want them to know exactly what to expect from every component that they have at their disposal. This goes double for myself in the future when I've forgotten how to work with my own system! To do that, a developer could go read each component and get a strong grasp of it before using one, but that feels like the onboarding would be incredibly slow. A better way would be to set up the interface so that as the developer is using the component, they have the right knowledge immediately available. Beyond that, it would bake in some defaults that don't allow developers to make costly mistakes and alert them to what those mistakes are before pushing code! Enter, of course, TypeScript. Astro comes with TypeScript set up out of the box. You don't have to use it, but since it's there, let's talk about how to use it to craft a stronger DX for our development teams.

Watch

I've also recorded a video version of this article that you can watch if that's your jam. Check it out on YouTube for chapters and closed captioning.

Setup

In this demo, we're going to use a basic Astro project. To get this started, run the following command in your terminal and choose the "Minimal" template.

npm create astro@latest

This will create a project with an index route and a very simple "Welcome" component. For clarity, I recommend removing the <Welcome /> component from the route to have a clean starting point for your project. To add a bit of design, I'd recommend setting up Tailwind for Astro (though you're welcome to style your component however you would like, including a style block in the component).

npx astro add tailwind

Once this is complete, you're ready to write your first component.

Creating the basic Heading component

Let's start by defining exactly what options we want to provide in our developer experience. For this component, we want to let developers choose from any HTML heading level (H1-H6). We also want them to be able to choose a specific font size and font weight — it may seem obvious now, but we don't want people choosing a specific heading level for the weight and font size, so we separate those concerns. Finally, we want to make sure that any additional HTML attributes can be passed through to our component. There are few things worse than having a component and then not being able to do basic functionality later.

Using Dynamic tags to create the HTML element

Let's start by creating a simple component that allows the user to dynamically choose the HTML element they want to use. Create a new component at ./src/components/Heading.astro.

---
// ./src/components/Heading.astro
const { as } = Astro.props;
const As = as;
---
<As>
  <slot />
</As>

To use a prop as a dynamic element name, we need the variable to start with a capital letter. We can define this as part of our naming convention and make the developer always capitalize this prop in their use, but that feels inconsistent with how most naming works within props. Instead, let's keep our focus on the DX, and take that burden on for ourselves. In order to dynamically register an HTML element in our component, the variable must start with a capital letter.
We can convert that in the frontmatter of our component. We then wrap all the children of our component in the <As> component by using Astro's built-in <slot /> component. Now, we can use this component in our index route and render any HTML element we want. Import the component at the top of the file, and then add <h1> and <h2> elements to the route.

---
// ./src/pages/index.astro
import Layout from '../layouts/Layout.astro';
import Heading from '../components/Heading.astro';
---
<Layout>
  <Heading as="h1">Hello!</Heading>
  <Heading as="h2">Hello world</Heading>
</Layout>

This will render them correctly on the page and is a great start.

Adding more custom props as a developer interface

Let's clean up the element choosing by bringing it inline to our props destructuring, and then add in additional props for weight, size, and any additional HTML attributes. To start, let's bring the custom element selector into the destructuring of the Astro.props object. At the same time, let's set a sensible default so that if a developer forgets to pass this prop, they still will get a heading.

---
// ./src/components/Heading.astro
const { as: As = "h2" } = Astro.props;
---
<As>
  <slot />
</As>

Next, we'll get weight and size. Here's our next design choice for our component system: do we make our developers know the class names they need to use, or do we provide a generic set of sizes and do the mapping ourselves? Since we're building a system, I think it's important to move away from class names and into a more declarative setup. This will also future-proof our system by allowing us to change out the underlying styling and class system without affecting the DX. Not only do we future-proof it, but we also are able to get around a limitation of Tailwind by doing this. Tailwind, as it turns out, can't handle dynamically-created class strings, so by mapping them, we solve an immediate issue as well. In this case, our sizes will go from small (sm) to six times the size (6xl) and our weights will go from "light" to "bold". Let's start by adjusting our frontmatter. We need to get these props off the Astro.props object and create a couple of objects that we can use to map our interface to the proper class structure.

---
// ./src/components/Heading.astro
const weights = {
  "bold": "font-bold",
  "semibold": "font-semibold",
  "medium": "font-medium",
  "light": "font-light"
}

const sizes = {
  "6xl": "text-6xl",
  "5xl": "text-5xl",
  "4xl": "text-4xl",
  "3xl": "text-3xl",
  "2xl": "text-2xl",
  "xl": "text-xl",
  "lg": "text-lg",
  "md": "text-md",
  "sm": "text-sm"
}

const { as: As = "h2", weight = "medium", size = "2xl" } = Astro.props;
---

Depending on your use case, this amount of sizes and weights might be overkill. The great thing about crafting your own component system is that you get to choose, and the only limitations are the ones you set for yourself. From here, we can then set the classes on our component. While we could add them in a standard class attribute, I find using Astro's built-in class:list directive to be the cleaner way to programmatically set classes in a component like this. The directive takes an array of classes that can be strings, arrays themselves, objects, or variables. In this case, we'll select the correct size and weight from our map objects in the frontmatter.
---
// ./src/components/Heading.astro
const weights = {
  bold: "font-bold",
  semibold: "font-semibold",
  medium: "font-medium",
  light: "font-light",
};
const sizes = {
  "6xl": "text-6xl",
  "5xl": "text-5xl",
  "4xl": "text-4xl",
  "3xl": "text-3xl",
  "2xl": "text-2xl",
  xl: "text-xl",
  lg: "text-lg",
  md: "text-base",
  sm: "text-sm",
};

const { as: As = "h2", weight = "medium", size = "2xl" } = Astro.props;
---
<As class:list={[
  sizes[size],
  weights[weight]
]}>
  <slot />
</As>

Your front end should shift a little with this update: the font weight will be slightly thicker, and the classes should now be applied in your developer tools. From here, add the props to your index route and find the right configuration for your app.

---
// ./src/pages/index.astro
import Layout from '../layouts/Layout.astro';
import Heading from '../components/Heading.astro';
---
<Layout>
  <Heading as="h1" size="6xl" weight="light">Hello!</Heading>
  <Heading as="h3" size="xl" weight="bold">Hello world</Heading>
</Layout>

Our custom props are finished, but currently we can’t use any default HTML attributes, so let’s fix that.

Adding HTML attributes to the component

We don’t know what sorts of attributes our developers will want to add, so let’s make sure they can add any additional ones they need. To do that, we can spread any other props being passed to our component onto the rendered element.

---
// ./src/components/Heading.astro
const weights = {
  // etc.
};
const sizes = {
  // etc.
};

const { as: As = "h2", weight = "medium", size = "md", ...attrs } = Astro.props;
---
<As class:list={[
  sizes[size],
  weights[weight]
]} {...attrs}>
  <slot />
</As>

From here, we can add any arbitrary attributes to our element.

---
// ./src/pages/index.astro
import Layout from '../layouts/Layout.astro';
import Heading from '../components/Heading.astro';
---
<Layout>
  <Heading id="my-id" as="h1" size="6xl" weight="light">Hello!</Heading>
  <Heading class="text-blue-500" as="h3" size="xl" weight="bold">Hello world</Heading>
</Layout>

I’d like to take a moment to truly appreciate one aspect of this code. On our <h1>, we add an id attribute. No big deal. On our <h3>, though, we add an additional class. My original assumption was that this would conflict with the class:list set in our component, but Astro takes that worry away: when a class is passed to the component, Astro merges it with the class:list directive and automatically makes it work. One less line of code!

In many ways, I like to consider these additional attributes “escape hatches” in our component library. Sure, we want our developers to use our tools exactly as intended, but sometimes it’s important to add new attributes or push our design system’s boundaries. For this, we allow them to add their own attributes, and it can create a powerful mix.

It looks done, but are we?

At this point, if you’re following along, it might feel like we’re done, but we have two issues with our code: (1) our component has “red squiggles” in our code editor, and (2) our developers can make a BIG mistake if they choose to.

The red squiggles come from type errors in our component. Astro gives us TypeScript and linting by default, and our untyped props come through as type any, which the tooling flags. Not a big deal, but potentially concerning depending on your deployment settings.

The other issue is that nothing requires our developers to choose a heading element for their heading. I’m all for escape hatches, but only if they don’t break the accessibility and SEO of my site.
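Concretely, nothing in our current setup stops a developer from writing something like this in the index route (a hypothetical misuse, not code from the article):

<Heading as="div" size="6xl" weight="light">Hello!</Heading>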
What happens if a developer does exactly that, using a div instead of an h1 on the page? We don’t have to imagine: make the change and see. It looks identical, but now there’s no <h1> element on the page. Our semantic structure is broken, and that’s bad news for many reasons. Let’s use typing to help our developers make the best decisions and know what options are available for each prop.

Adding types to the component

To set up our types, we first want to make sure we handle any HTML attributes that come through. Astro, again, has our backs and ships the typing we need to make this work: we can import the right HTML attribute types from Astro’s typing package, then extend that type with our own props. In our example, we’ll use the h1 types, since those should cover most anything we need for our headings. Inside the Props interface, we’ll also add our first custom type: the as prop must be one of a set of strings instead of just a basic string. In this case, we want it to be h1–h6 and nothing else.

---
// ./src/components/Heading.astro
import type { HTMLAttributes } from 'astro/types';

interface Props extends HTMLAttributes<'h1'> {
  as: "h1" | "h2" | "h3" | "h4" | "h5" | "h6";
}

// ... the rest of the file
---

After adding this, you’ll note that in your index route the component where we set as="div" now has a red underline. When you hover over it, it lets you know that the as type does not allow div and shows the list of acceptable strings. If you remove the div, you also get autocompletion: a list of what’s available appears as you type the string. While that’s not a big deal for element selection, knowing what’s available matters much more for the rest of the props, since those are far more custom.

Let’s extend the custom typing to cover all the available options. We denote these props as optional by using ?: before defining the type.

While we could define each of these with the same literal-union approach as our as type, that doesn’t keep things future-proofed: if we add a new size or weight, we’d have to remember to update the type. To solve this, we can use a fun trick in TypeScript: keyof typeof. Two operators help us convert our weights and sizes object maps into string literal types (there’s a standalone sketch of this after the code below):

typeof: takes a value and produces its type. For instance, typeof weights gives us the type { bold: string, semibold: string, ...etc }.

keyof: takes a type and produces a union of string literals from that type’s keys. For instance, keyof typeof weights gives us "bold" | "semibold" | ...etc, which is exactly what we want for both weights and sizes.

---
// ./src/components/Heading.astro
import type { HTMLAttributes } from 'astro/types';

interface Props extends HTMLAttributes<'h1'> {
  as: "h1" | "h2" | "h3" | "h4" | "h5" | "h6";
  weight?: keyof typeof weights;
  size?: keyof typeof sizes;
}

// ... the rest of the file
---

Now, when we want to add a size or weight, we get a dropdown list in our code editor showing exactly what’s available on the type. If something outside the list is selected, the editor shows an error, helping the developer see what they missed. While none of this is necessary to create Astro components, the fact that it’s built in, with no additional tooling to set up, makes it very easy to opt into.
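To see what those two operators buy us in isolation, here’s a minimal standalone TypeScript sketch (my illustration, reusing the same weights map as the component):

const weights = {
  bold: "font-bold",
  semibold: "font-semibold",
  medium: "font-medium",
  light: "font-light",
};

// typeof weights      -> { bold: string; semibold: string; medium: string; light: string }
// keyof typeof weights -> "bold" | "semibold" | "medium" | "light"
type Weight = keyof typeof weights;

const ok: Weight = "semibold";   // compiles
// const nope: Weight = "heavy"; // error: not assignable to type Weight

Add a new key to the weights map and the Weight union picks it up automatically; there’s no separate list of strings to maintain.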
I’m by no means a TypeScript expert, but getting this set up for each component takes only a few additional minutes and can save a lot of time for developers down the line (not to mention, it makes onboarding developers to your system much easier).

Crafting Strong DX With Astro Components and TypeScript originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  29. by: Chris Coyier Mon, 31 Mar 2025 15:25:36 +0000 New CSS features help us in all sorts of different ways, but here we’re going to look at them when they power a specific type of component, or make a type of component newly possible with less or no JavaScript.

A single element CSS donut timer/countdown timer by Ben Frain — The surely least-used gradient type, conic-gradient(), is used here to make donut charts (as I’d call them) that behave like a timer when animated. This kind of thing changes the web in that we no longer need to reach for weightier or more complex technology to do something like this, which is actually visually pretty simple.

Sliding 3D Image Frames In CSS by Temani Afif — This one isn’t rife with new CSS features, but that almost makes it more mind-blowing to me. The HTML is only an <img>, but the end result is a sliding door on a 3D box that slides up to reveal the photo. This requires multiple backgrounds, including a conic-gradient, a box-shadow, and a very exotic clip-path, not to mention a transition for the movement.

⭐️ Carousel Configurator by the Chrome Gang — This one is wild. It only works in Google Chrome Canary because of experimental features. Scroll snapping is there, of course, and that’s neat and fundamental to carousels, but the other three features are, as I said, wild. (1) A ::scroll-button pseudo-element, which appends, apparently, a fully interactive button that advances the scroll by one page. (2) The ::scroll-marker and ::scroll-marker-group pseudo-elements, which are apparently a replacement for a scrollbar and are instead fully interactive stateful buttons that represent how many pages a scrolling container has. (3) An interactivity: inert; declaration, which you can apply within an animation-timeline so that off-screen parts of the carousel are not interactive. All this seems extremely experimental, but I’m here for it.

Hide a header when scrolling down, show it again when scrolling up by Bramus Van Damme — With scroll-driven animations, you can “detect” whether a page is being scrolled up or down and, in this case, set the value of custom properties based on that information. Then, with style() queries, you can set other styles, like hiding or showing a header. The big trick here is persisting the styles even when not scrolling, which involves an infinite transition-delay. This is the magic that keeps the header hidden until you scroll back up.

Center Items in First Row with CSS Grid by Ryan Mulligan — When you’re using CSS Grid, for the most part, you set up grid lines and place items exactly along those grid lines. That’s why it’s weird to see “staggered”-looking layouts, where one row of items doesn’t line up exactly with the one below it. But if you make twice as many columns as you need and offset by one when you need to, you get this kind of control. The trick is figuring out when.
