In today’s fast-paced business world, we’re seeing a major shift in how companies operate and manage their resources. At the center of this transformation is AI SaaS (Artificial Intelligence Software as a Service). These cloud-based AI solutions are changing the game for businesses of all sizes. No longer do companies need massive budgets or technical teams to access powerful AI tools. Instead, they can subscribe to services that offer ready-to-use AI capabilities for a fraction of the traditional cost. Tools like an AI prompt generator are making it easier than ever to harness AI’s potential. This shift isn’t just convenient–it’s completely reshaping business economics in ways we couldn’t have imagined just a few years ago.
Breaking Down Cost Barriers
Remember when implementing new technology meant huge upfront investments? Those days are quickly fading. With AI SaaS, the economic model has fundamentally changed. Companies can now access sophisticated AI tools through subscription models that spread costs over time. Using an OpenAI pricing calculator can help businesses accurately forecast their AI expenditures. This shift from capital expenditure to operational expenditure is a game-changer, especially for smaller businesses. We’re noticing that even startups can now compete with established players by leveraging the same powerful AI tools without breaking the bank.
The pay-as-you-go model also means businesses aren’t locked into expensive systems that might become outdated. Instead, they can scale services up or down as needed, ensuring they only pay for what they actually use. This flexibility creates a much healthier cash flow situation for businesses navigating uncertain economic times.
Productivity Revolution
Let’s talk about what happens when AI takes over routine tasks. We’re seeing dramatic shifts in how work gets done. Tasks that once required hours of human attention–like data entry, basic customer service, or initial sales qualification–can now be handled by AI systems. An AI prompt manager can help organizations maintain and optimize their AI interactions across different departments. This doesn’t just save time; it fundamentally changes the economics of business operations.
When employees are freed from repetitive tasks, they can focus on high-value work that actually drives business growth. The numbers speak for themselves: many companies report productivity increases of 30-40% after implementing AI SaaS solutions. This isn’t just about doing the same work faster–it’s about completely reimagining what’s possible with the same number of employees.
Democratizing Advanced Capabilities
Perhaps the most transformative aspect of AI SaaS is how it’s leveling the playing field. We’re witnessing a democratization of capabilities that were once available only to tech giants with massive R&D budgets. Now, a small marketing firm can access the same quality of predictive analytics as a Fortune 500 company. A local retailer can implement sophisticated inventory management systems that rival major chains.
This accessibility is creating new economic opportunities across the business landscape. Companies that might have been pushed out of competitive markets due to technological disadvantages can now stay in the game. We’re essentially seeing a redistribution of competitive advantage, where strategic implementation of AI tools can matter more than sheer company size or historical market dominance.
Data-Driven Decision Making
The economics of decision-making has also been transformed by AI SaaS. In the past, business leaders often relied on intuition and limited data samples to make major strategic choices. Now, we have access to AI systems that can process vast amounts of information and identify patterns humans might miss.
This shift dramatically reduces the cost of poor decisions. When companies can test scenarios, predict outcomes, and analyze trends with AI-powered tools, they’re less likely to make expensive mistakes. We’re finding that businesses using AI for decision support typically see reductions in failed initiatives and improvements in successful outcomes. The economic impact of better decisions compounds over time, creating significant long-term advantages for early adopters.
New Business Models
AI SaaS isn’t just changing how existing businesses operate–it’s creating entirely new economic models and opportunities. We’re seeing the emergence of micro-businesses built entirely around AI capabilities, offering specialized services that wouldn’t have been viable without these technologies.
For instance, small teams can now offer enterprise-grade analytics services by leveraging AI platforms. Solo entrepreneurs can create sophisticated digital products with AI assistance that would have required entire development teams in the past. These new business models are reshaping industry structures and creating economic value in previously untapped areas.
Challenges and Considerations
While the economic benefits are clear, we should acknowledge that this transformation isn’t without challenges. Businesses need to carefully navigate implementation costs, training requirements, and integration with existing systems. There’s also the ongoing challenge of choosing the right AI SaaS partners in an increasingly crowded marketplace.
Data privacy concerns and regulatory compliance add another layer of economic consideration. Companies must balance the benefits of AI-powered insights with the costs of ensuring proper data governance. Implementing a harmful content detector can help organizations maintain ethical standards and protect their brand reputation. We’re finding that successful organizations view these not just as compliance issues but as opportunities to build trust and differentiation in the market.
Looking Forward
As we look to the future, the economic impact of AI SaaS will likely accelerate. We expect to see even more sophisticated tools becoming available at increasingly accessible price points. The gap between early adopters and laggards may widen, creating stronger economic incentives for businesses to embrace these technologies sooner rather than later.
The most successful organizations will be those that view AI SaaS not just as a cost-saving measure but as a strategic asset that can transform their business models. We’re already seeing evidence that companies taking this approach are outperforming their peers in terms of growth and profitability.
Conclusion
The transformation of business economics through AI SaaS represents one of the most significant shifts in how companies operate and compete. From reducing costs and improving productivity to enabling entirely new business models, these technologies are creating opportunities across every industry.
For business leaders, the message is clear: understanding and strategically implementing AI SaaS isn’t just a technology decision–it’s a fundamental business imperative with far-reaching economic implications. As these tools continue to evolve, they’ll increasingly separate market leaders from the rest of the pack. The question isn’t whether AI SaaS will transform your business economics, but how quickly you’ll adapt to this new reality.
Publer is your ultimate social media management superhero, designed to simplify and enhance your social media presence. Whether you’re managing multiple brands or looking to streamline your social media strategy, Publer offers a comprehensive suite of tools to help you collaborate, schedule, and analyze your posts across various platforms. From Facebook and Instagram to TikTok and Twitter, Publer ensures that your social media campaigns are efficient, effective, and engaging.
Features
Link in Bio: Draw attention with a unique link on Instagram Bio.
Calendar View: View, create, and organize all upcoming social media posts.
Workspaces: Collaborate with other members to manage multiple brands.
Browser Extension: Create & schedule new social media posts from any website.
Analytics: Make data-driven decisions using powerful social media analytics.
Curate Posts: Create and preview social media posts in real-time.
Bulk Scheduling: Schedule up to 500 posts with a CSV file or other bulk options.
Recycling: Save time and recycle any top-performing content.
AI Assist: Unleash the power of AI on your social media.
Media Integrations: Design from scratch and organize all visual content.
RSS Feed: Automate new posts from your favorite RSS Feeds.
Free Tools: Includes photo & video downloader, bold & italic text generator, and threads to carousels converter (coming soon).
How it Works
Sign Up: Create a Publer account to get started.
Connect Accounts: Link your social media accounts including Facebook, Instagram, Twitter, LinkedIn, Pinterest, and more.
Create Posts: Use the intuitive interface to draft, preview, and schedule posts.
Collaborate: Utilize workspaces to collaborate with team members on content creation and scheduling.
Schedule: Plan your posts in advance using the calendar view and bulk scheduling options.
Analyze: Leverage the analytics feature to track performance and refine your strategy.
Benefits
Streamlined Workflow: Manage all your social media accounts from a single platform.
Collaborative Tools: Workspaces enhance team collaboration and efficiency.
Enhanced Creativity: AI Assist and media integrations boost your content creation process.
Data-Driven Decisions: Analytics provide insights to optimize your social media strategy.
Time-Saving: Features like bulk scheduling and recycling of top-performing content save valuable time.
Pricing
Free Plan: Basic features with limited access.
Professional Plan: Enhanced features and tools for growing businesses.
Business Plan: Advanced features for large teams and multiple brands.
Enterprise Plan: Custom solutions tailored to large-scale operations.
Review
Publer has garnered positive reviews for its user-friendly interface and robust feature set. Users appreciate the platform’s ability to streamline social media management tasks, from scheduling and analytics to collaboration and content creation. The integration of AI tools and media management features further enhances its value, making it a preferred choice for businesses of all sizes looking to optimize their social media strategies.
Conclusion
Publer stands out as a comprehensive social media management platform that caters to the diverse needs of businesses and social media managers. With its powerful features, collaborative tools, and data-driven insights, Publer empowers users to take control of their social media presence and achieve their marketing goals. Whether you’re a small business or a large enterprise, Publer provides the tools you need to succeed in the dynamic world of social media.
In an age where visual content is king, having high-quality images and videos is paramount. Remini, an AI-powered photo enhancer, offers a cutting-edge solution for transforming low-quality visuals into stunning HD masterpieces. From restoring old photos to enhancing modern digital content, Remini provides a comprehensive suite of tools that cater to various needs.
Features
Enhance: Instantly improve the overall quality of your photos and videos, making them sharper and more vibrant.
Unblur & Sharpener: Remove motion blur, camera shake, and focus issues to create clear and sharp images.
Denoiser: Eliminate grain and noise from photos to achieve crystal-clear visuals.
Old Photos Restorer: Revive blurred, faded, and damaged photos, bringing them back to life with incredible detail.
Image Enlarger: Upscale photos and videos up to 2x their original size without losing quality.
Color Fixer: Enhance the color tones in your photos to produce natural and vivid images.
Face Enhancer: Improve facial details in portraits, ensuring a natural and realistic look.
Background Enhancer: Increase the quality of every detail in the background of your images.
Low Quality Enhancer: Boost the quality of low-resolution images to make them appear professional.
Video Enhancer: Enhance and enlarge your videos with AI-powered technology.
How It Works
Using Remini is straightforward:
Upload Your Image/Video: Start by uploading the photo or video you wish to enhance.
Select the Enhancement Tool: Choose from Remini’s suite of enhancement tools based on your specific needs.
Apply the Enhancement: Let Remini’s AI technology work its magic to transform your visual content.
Download the Enhanced Content: Once the enhancement is complete, download your high-quality image or video.
Benefits
User-Friendly Interface: Remini is designed to be intuitive, making it accessible to users of all skill levels.
Professional-Grade Enhancements: Achieve studio-quality enhancements with a few simple clicks.
Time-Efficient: Save hours of manual editing with instant AI-powered improvements.
Versatility: Suitable for various applications, including social media, heritage preservation, printing services, e-commerce, education, and magazines.
High Satisfaction: Millions of users worldwide trust Remini for its consistent, high-quality results.
Pricing
Remini offers both free and premium features:
Free Version: Access basic enhancement tools with ads.
Premium Membership: Unlock all advanced features and enjoy an ad-free experience for a subscription fee.
Review
Users across different platforms have praised Remini for its impressive capabilities:
Android User: “Never seen such a high-quality app that is free. It’s easy to access and fun. Highly recommended!”
iOS User: “This app is amazing. It dramatically improved an old photo of my grandmother. I’m delighted.”
Web User: “Remini is the quickest solution for enhancing photos, much faster than manual editing.”
Conclusion
Remini stands out as an essential tool for anyone looking to elevate their visual content. Its powerful AI-driven enhancements make it possible to transform low-quality images and videos into professional-grade masterpieces effortlessly. Whether you’re a social media enthusiast, a historian preserving family photos, or a professional needing high-quality visuals, Remini has you covered.
Magnific AI offers a revolutionary tool for transforming and upscaling images to achieve stunning high-resolution results. This AI-driven software is designed to enhance photos, illustrations, and digital art with an impressive level of detail. Whether you’re a professional artist, graphic designer, or just someone looking to improve personal photos, Magnific AI promises to take your creations to the next level with ease and precision.
Features
Magnific AI boasts a range of powerful features that make it a standout tool for image enhancement:
Advanced AI Technology: Utilizes cutting-edge AI to upscale and enhance images with remarkable detail.
Natural Language Prompts: Direct the upscaling process using descriptive text prompts.
Creativity Slider: Adjust the level of AI-generated details and creative enhancements.
HDR and Resemblance Sliders: Fine-tune the high-dynamic-range effects and resemblance to the original image.
Variety of Uses: Perfect for portraits, illustrations, video games, landscapes, films, and more.
User-Friendly Interface: Intuitive and accessible for creators of all skill levels.
How It Works
Upload Your Image: Start by uploading the image you want to upscale or enhance.
Set Parameters: Use the natural language prompt and sliders (Creativity, HDR, Resemblance) to define how you want the AI to process your image.
AI Processing: Magnific AI uses its advanced algorithms to transform your image based on the parameters set.
Download Enhanced Image: Once the process is complete, download the high-resolution, enhanced version of your image.
Benefits
High-Resolution Output: Achieve stunningly detailed and high-resolution images.
Creative Control: Customize the enhancement process with intuitive controls.
Versatility: Suitable for a wide range of applications, from personal photos to professional digital art.
Time-Saving: Quickly and efficiently upscale and enhance images without manual editing.
Cost-Efficiency: Offers a cost-effective solution compared to traditional methods of image enhancement.
Pricing
Magnific AI offers various pricing plans to suit different needs:
Pro Plan: $39 per month
Premium Plan: $99 per month
Business Plan: $299 per month
Annual subscriptions are available with a two-month discount. Subscriptions can be canceled at any time through the billing portal.
Review
Users of Magnific AI have praised its ability to transform images with minimal effort. The intuitive interface and powerful AI-driven enhancements make it a favorite among photographers, digital artists, and businesses. While some users have noted occasional artifacts, these can generally be managed with the available sliders and prompts.
Conclusion
Magnific AI stands out as a powerful tool for anyone looking to enhance and upscale images with ease and precision. Its advanced AI technology, user-friendly interface, and versatile applications make it an invaluable resource for both professionals and enthusiasts. Whether for personal projects or professional work, Magnific AI delivers results that truly feel like magic.
In my last article on “Revisiting CSS Multi-Column Layout”, I mentioned that almost twenty years have flown by since I wrote my first book, Transcending CSS. In it, I explained how and why to use what was, at the time, an emerging CSS property.
I was very excited about the possibilities this new property would offer. After all, we could now add images to the borders of any element, even table cells and rows (unless their borders had been set to collapse).
Since then, I’ve used border-image regularly. Yet, it remains one of the most underused CSS tools, and I can’t, for the life of me, figure out why. Is it possible that people steer clear of border-image because its syntax is awkward and unintuitive? Perhaps it’s because most explanations don’t solve the type of creative implementation problems that most people need to solve. Most likely, it’s both.
I’ve recently been working on a new website for Emmy-award-winning game composer Mike Worth. He hired me to create a highly graphical design that showcases his work, and I used border-image throughout.
Finally, I could insert an entirely CSS-generated conic, linear, or radial gradient into my border:
border-image-source: conic-gradient(…);
Tip: It’s useful to remember that a browser renders a border-image above an element’s background and box-shadow but below its content. More on that a little later.
Slicing up a border-image
Now that I’ve specified the source of a border image, I can apply it to a border by slicing it up and using the parts in different positions around an element. This can be the most baffling aspect for people new to border-image.
Most border-image explanations show an example where the pieces will simply be equally-sized, like this:
However, a border-image can be developed from any shape, no matter how complex or irregular.
Instead of simply inserting an image into a border and watching it repeat around an element, invisible cut-lines slice up a border-image into nine parts. These lines are similar to the slice guides found in graphics applications. The pieces are, in turn, inserted into the nine regions of an element’s border.
The border-image-slice property defines the size of each slice by specifying the distance from each edge of the image. I could use the same distance from every edge:
border-image-slice: 65;
I can combine top/bottom and left/right values:
border-image-slice: 115 65;
Or, I can specify distance values for all four cut-lines, running clockwise: top, right, bottom, left:
border-image-slice: 65 65 115 125;
The top-left of an image will be used on the top-left corner of an element’s border. The bottom-right will be used on the bottom-right, and so on.
I don’t need to add units to border-image-slice values when using a bitmap image as the browser correctly assumes bitmaps use pixels. The SVG viewBox makes using them a little different, so I also prefer to specify their height and width:
<svg height="600px" width="600px">…</svg>
Don’t forget to set the widths of these borders, as without them, there will be nowhere for a border’s image to display:
border-image-width: 65px 65px 115px 125px;
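Putting the longhands so far together, the source, slice, and width can also be collapsed into the border-image shorthand, with the width following the slice after a slash. A minimal sketch (the filename is a placeholder, not an asset from the actual site):

```css
.panel {
  /* source, then slice values, then widths after a slash */
  border-image: url("frame.svg") 65 65 115 125 / 65px 65px 115px 125px;
}
```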
Filling in the center
So far, I’ve used all four corners and sides of my image, but what about the center? By default, the browser will ignore the center of an image after it’s been sliced. But I can put it to use by adding the fill keyword to my border-image-slice value:
border-image-slice: 65 65 115 125 fill;
Setting up repeats
With the corners of my border images in place, I can turn my attention to the edges between them. As you might imagine, the slice at the top of an image will be placed on the top edge. The same is true of the right, bottom, and left edges. In a flexible design, we never know how wide or tall these edges will be, so I can fine-tune how images will repeat or stretch when they fill an edge.
Stretch: When a sliced image is flat or smooth, it can stretch to fill any height or width. Even a tiny 65px slice can stretch to hundreds or thousands of pixels without degrading.
border-image-repeat: stretch;
Repeat: If an image has texture, stretching it isn’t an option, so it can repeat to fill any height or width.
border-image-repeat: repeat;
Round: If an image has a pattern or shape that can’t be stretched and I need to match the edges of the repeat, I can specify that the repeat be round. A browser will resize the image so that only whole pieces display inside an edge.
border-image-repeat: round;
Space: Similar to round, when using the space property, only whole pieces will display inside an edge. But instead of resizing the image, a browser will add spaces into the repeat.
border-image-repeat: space;
When I need to specify a separate stretch, repeat, round, or space value for each edge, I can use multiple keywords:
border-image-repeat: stretch round;
Outsetting a border-image
There can be times when I need an image to extend beyond an element’s border-box. Using the border-image-outset property, I can do just that. The simplest syntax extends the border image evenly on all sides by 10px:
border-image-outset: 10px;
Of course, there being four borders on every element, I could also specify each outset individually:
border-image-outset: 20px 10px;
/* or */
border-image-outset: 20px 10px 0;
border-image in action
Mike Worth is a video game composer who’s won an Emmy for his work. He loves ’90s animation — especially Disney’s DuckTales — and he asked me to create custom artwork and develop a bold, retro-style design.
My challenge when developing for Mike was implementing my highly graphical design without compromising performance, especially on mobile devices. While it’s normal in CSS to accomplish the same goal in several ways, here, border-image often proved to be the most efficient.
Decorative buttons
The easiest and most obvious place to start was creating buttons reminiscent of stone tablets with chipped and uneven edges.
I created an SVG of the tablet shape and added it to my buttons using border-image:
I set the border-image-repeat on all edges to stretch and the center slice to fill so these stone tablet-style buttons expand along with their content to any height or width.
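Sketched in CSS, that combination looks something like this — the filename and slice value here are placeholders rather than the production values:

```css
button {
  border: solid transparent;
  border-image-source: url("button-tablet.svg");
  border-image-slice: 65 fill;  /* fill keeps the tablet texture behind the label */
  border-image-width: 65px;
  border-image-repeat: stretch; /* flat chipped edges stretch to any size */
}
```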
Article scroll
I want every aspect of Mike’s website design to express his brand. That means continuing the ’90s cartoon theme in his long-form content by turning it into a paper scroll.
The markup is straightforward with just a single article element:
<article>
<!-- ... -->
</article>
But, I struggled to decide how to implement the paper effect. My first thought was to divide my scroll into three separate SVG files (top, middle, and bottom) and use pseudo-elements to add the rolled up top and bottom parts of the scroll. I started by applying a vertically repeating graphic to the middle of my article:
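A sketch of that first, three-file approach — the filenames are hypothetical:

```css
article {
  background-image: url("scroll-middle.svg");
  background-repeat: repeat-y;   /* tile the paper texture vertically */
  background-size: 100% auto;
}
article::before {
  /* rolled-up top of the scroll */
  background: url("scroll-top.svg") no-repeat;
}
article::after {
  /* rolled-up bottom of the scroll */
  background: url("scroll-bottom.svg") no-repeat;
}
```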
While this implementation worked as expected, using two pseudo-elements and three separate SVG files felt clumsy. However, using border-image, one SVG, and no pseudo-elements feels more elegant and significantly reduces the amount of code needed to implement the effect.
I started by creating an SVG of the complete tablet shape:
And I worked out the position of the four cut-lines:
Then, I inserted this single SVG into my article’s border by first selecting the source, slicing the image, and setting the top and bottom edges to stretch and the left and right edges to round:
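As a sketch, with placeholder slice values and filename (the first border-image-repeat keyword applies to the top and bottom edges, the second to the left and right):

```css
article {
  border-image-source: url("scroll.svg");
  border-image-slice: 150 95 150 95 fill; /* fill paints the paper centre */
  border-image-width: 150px 95px 150px 95px;
  border-image-repeat: stretch round;     /* top/bottom stretch, sides round */
}
```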
The result is a flexible paper scroll effect which adapts to both the viewport width and any amount or type of content.
Home page overlay
My final challenge was implementing the action-packed graphic I’d designed for Mike Worth’s home page. This contains a foreground SVG featuring Mike’s orangutan mascot and a zooming background graphic:
I wanted this graphic to spin and add subtle movement to the panel, so I applied a simple CSS animation to the pseudo-element:
@keyframes spin-bg {
from { transform: rotate(0deg); }
to { transform: rotate(360deg); }
}
section::before {
animation: spin-bg 240s linear infinite;
}
Next, I added a CSS mask to fade the edges of the zooming graphic into the background. The CSS mask-image property specifies a mask layer image, which can be a PNG image, an SVG image or mask, or a CSS gradient:
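Here, a CSS gradient does the job. A minimal sketch, assuming a radial gradient fade (the exact stops are placeholders):

```css
section::before {
  /* opaque in the middle, fading to transparent at the edges */
  mask-image: radial-gradient(circle at center, rgb(0 0 0) 55%, transparent 90%);
}
```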
At this point, you might wonder where a border image could be used in this design. To add more interactivity to the graphic, I wanted to reduce its opacity and change its color — by adding a colored gradient overlay — when someone interacts with it. One of the simplest, but rarely-used, methods for applying an overlay to an element is using border-image. First, I added a default opacity and added a brief transition:
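Assuming the overlay lives on the section’s ::before pseudo-element, as in the animation above, that setup might look like this (the transition duration is a placeholder):

```css
section::before {
  opacity: 1;
  transition: opacity .25s ease-in-out;
}
```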
Then, on hover, I reduced the opacity to .5 and added a border-image:
section:hover::before {
opacity: .5;
border-image: fill 0 linear-gradient(rgba(0,0,255,.25),rgba(255,0,0,1));
}
You may wonder why I’ve not used the other border-image values I explained earlier, so I’ll dissect that declaration. First is the border-image-slice value, where a slice of zero ensures that the eight corners and edges stay empty, and the fill keyword ensures the middle section is filled with the gradient. Second, the border-image-source is a CSS linear gradient that blends blue into red. A browser renders this border-image above the background but behind the content.
Conclusion: You should take a fresh look at border-image
The border-image property is a powerful, yet often overlooked, CSS tool that offers incredible flexibility. By slicing, repeating, and outsetting images, you can create intricate borders, decorative elements, and even dynamic overlays with minimal code.
In my work for Mike Worth’s website, border-image proved invaluable, improving performance while maintaining a highly graphical aesthetic. Whether used for buttons, interactive overlays, or larger graphic elements, border-image can create visually striking designs without relying on extra markup or multiple assets.
If you’ve yet to experiment with border-image, now’s the time to revisit its potential and add it to your design toolkit.
Often referred to as one of the pioneers of web design, Andy Clarke has been instrumental in pushing the boundaries of web design and is known for his creative and visually stunning designs. His work has inspired countless designers to explore the full potential of product and website design.
Andy’s written several industry-leading books, including Transcending CSS, Hardboiled Web Design, and Art Direction for the Web. He’s also worked with businesses of all sizes and industries to achieve their goals through design.
Visit Andy’s studio, Stuff & Nonsense, and check out his Contract Killer, the popular web design contract template trusted by thousands of web designers and developers.
I’ve seen a handful of recent posts talking about the utility of the :is() relational pseudo-selector. No need to delve into the details other than to say it can help make compound selectors a lot more readable.
:is(section, article, aside, nav) :is(h1, h2, h3, h4, h5, h6) {
color: #BADA55;
}
/* ... which would be the equivalent of: */
section h1, section h2, section h3, section h4, section h5, section h6,
article h1, article h2, article h3, article h4, article h5, article h6,
aside h1, aside h2, aside h3, aside h4, aside h5, aside h6,
nav h1, nav h2, nav h3, nav h4, nav h5, nav h6 {
color: #BADA55;
}
There’s just one catch: the specificity. The selector’s specificity matches the most specific selector in the function’s arguments. That’s not a big deal when working with a relatively flat style structure containing mostly element and class selectors, but if you toss an ID in there, then that’s the specificity you’re stuck with.
What if you don’t want that? Some articles suggest nesting selectors instead, which works but doesn’t offer quite the same writing ergonomics.
That’s where I want to point to the :where() selector instead! It’s exactly the same as :is() but without the specificity baggage: it always carries a specificity score of zero. You might even think of it as a sort of specificity reset.
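For example, even with an ID inside the argument list, :where() keeps the whole pseudo-class at zero specificity, so a plain element selector later in the stylesheet can still win:

```css
:where(#sidebar, article) h2 {
  color: #BADA55; /* specificity (0,0,1) — only the bare h2 counts */
}

h2 {
  color: rebeccapurple; /* equal specificity, later in the cascade: this wins */
}
```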
Backing up a database in MS SQL Server is vital to safeguard data and recover it after events like hardware failure, server crashes, or database corruption. MS SQL Server provides different types of backups: full, differential, and transaction log. A full backup allows you to restore the database exactly as it was at the time the backup was created. A differential backup stores only the changes since the last full backup, whereas a transaction log backup is an incremental backup that stores all transaction log records since the previous log backup.
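As a quick sketch, the three backup types can be created with T-SQL like this (the database name and paths are placeholders):

```sql
-- Full backup: the baseline every restore sequence starts from
BACKUP DATABASE [SalesDB] TO DISK = N'C:\Backup\SalesDB_Full.bak';

-- Differential backup: only the changes since the last full backup
BACKUP DATABASE [SalesDB] TO DISK = N'C:\Backup\SalesDB_Diff.bak' WITH DIFFERENTIAL;

-- Transaction log backup: requires the FULL (or BULK_LOGGED) recovery model
BACKUP LOG [SalesDB] TO DISK = N'C:\Backup\SalesDB_Log.trn';
```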
When you restore a SQL database backup, SQL Server offers two options to control the state of the database after the restore. These are:
RESTORE WITH RECOVERY
When you use the RESTORE WITH RECOVERY option, it indicates that no more restores are required, and the database is brought online after the restore operation.
RESTORE WITH NORECOVERY
You can select the WITH NORECOVERY option when you want to continue restoring additional backup files, such as differential or transaction log backups. It leaves the database in the RESTORING state until it is recovered.
Now, let’s learn how to use the WITH RECOVERY and NORECOVERY options when restoring the database.
How to Restore MS SQL Server Database with the RECOVERY Option?
You can use the WITH RECOVERY option to restore a database from a full backup. It is the default option in the Restore Database window and is used when restoring the last (or only) backup in a restore sequence. You can restore a database with the RECOVERY option by using SQL Server Management Studio (SSMS) or T-SQL commands.
1. Restore Database with RECOVERY Option using SSMS
If you want to restore a database without writing code or scripts, you can use the graphical user interface in SSMS. Here are the steps to restore a database WITH RECOVERY using SSMS:
Open SSMS and connect to your SQL Server instance.
Go to Object Explorer, expand databases, and right-click on the database.
Click Tasks > Restore.
On the Restore database page, under General, select the database you want to restore and the available backup.
Next, on the same page, click Options.
In the Options window, select the recovery state as RESTORE WITH RECOVERY. Click OK.
2. Restore Database with RECOVERY Option using T-SQL Command
If you have a large number of operations that need to be managed or you want to automate the tasks, then you can use T-SQL commands. You can use the below T-SQL command to restore the database with the RECOVERY option.
RESTORE DATABASE [DBName] FROM DISK = 'C:\Backup\DB.bak' WITH RECOVERY;
How to Restore MS SQL Server Database with NORECOVERY Option?
You can use the NORECOVERY option to restore multiple backup files. For example, if your system fails and you need to restore the SQL Server database to the point just before the failure occurred, you need a multi-step restore. In this case, the backups must be restored in sequence: Full Backup > Differential > Transaction log. Each backup should be restored WITH NORECOVERY except the last one. This option changes the state of the database to RESTORING and makes the database inaccessible to users until the remaining backups are restored. You can restore a database with the NORECOVERY option by using SSMS or T-SQL commands.
1. Using T-SQL Commands
Here are the steps to restore MS SQL database with the NORECOVERY option by using T-SQL commands:
Step 1: First, you need to restore the Full Backup by using the below command:
RESTORE DATABASE [YourDatabaseName]
FROM DISK = N'C:\Path\To\Your\FullBackup.bak'
WITH NORECOVERY, STATS = 10;
Step 2: Then, you need to restore the Differential Backup. Use the below command:
RESTORE DATABASE [YourDatabaseName]
FROM DISK = N'C:\Path\To\Your\DifferentialBackup.bak'
WITH NORECOVERY, STATS = 10;
Step 3: Now, you have to restore the Transaction log backup (last backup WITH RECOVERY). Here’s the command:
RESTORE LOG [YourDatabaseName]
FROM DISK = N'C:\Path\To\Your\LastTransactionLogBackup.bak'
WITH RECOVERY, STATS = 10;
2. Using SQL Server Management Studio (SSMS)
You can follow the below steps to restore a database with the NORECOVERY option using SSMS:
In SSMS, go to the Object Explorer, expand databases, and right-click the database node.
Click Tasks, select Restore, and click Database.
In the Restore Database page, select the source (i.e., the full backup) and the destination.
Next, verify the information about the selected backup files under the option labelled Backup sets to restore.
Next, on the same Restore Database page, click Options.
On the Options page, click RESTORE WITH NORECOVERY in the Recovery state field. Click OK.
What if the SQL Database Backup File is Corrupted?
Sometimes, the restore process can fail due to corruption in the database backup file. If your backup file is corrupted or you haven't created a backup, you can use a professional MS SQL repair tool, like Stellar Repair for MS SQL Technician. It is an advanced SQL repair tool that repairs corrupt databases and backup files with complete integrity. The tool can repair backup files of any type - transaction log, full, and differential - without any file-size limitations. It can even restore deleted items from the backup file. The tool is compatible with MS SQL Server versions 2022, 2019, and earlier.
Conclusion
Above, we have discussed the stepwise process to restore the SQL database with RECOVERY and NORECOVERY options in MS SQL Server. If you face any error or issue while restoring the backup, then you can use a professional SQL repair tool, like Stellar Repair for MS SQL Technician. It can easily restore all the data from corrupt backup (.bak) files and save it in a new database file with complete precision. The tool can help resolve all the errors related to corruption in SQL database and backup (.bak) files.
Yes, you are reading that correctly: This is indeed a guide to styling counters with CSS. Some of you are cheering, “Finally!”, but I understand that the vast majority of you are thinking, “Um, it’s just styling lists.” If you are part of the second group, I get it. Before learning and writing more and more about counters, I thought the same thing. Now I am part of the first group, and by the end of this guide, I hope you join me there.
There are many ways to create and style counters, which is why I wanted to write this guide and also how I plan to organize it: going from the most basic styling to the top-notch level of customization, sprinkling in between some sections about spacing and accessibility. It isn’t necessary to read the guide in order — each section should stand by itself, so feel free to jump to any part and start reading.
List elements were among the first 18 tags that made up HTML. Their representation wasn’t strictly defined yet, but a bulleted list was deemed fitting for unordered lists, and a sequence of numbered paragraphs for ordered lists.
Cool, but not enough; soon people needed more than HTML alone could offer, and new list attributes were added over the years to fill in the gaps.
start
The start attribute takes an integer and sets from where the list should start:
We can use the type attribute to change the counter’s representation. It’s similar to CSS’s list-style-type, but it has its own limited uses and shouldn’t be used interchangeably*. Its possible values are:
Funny enough, the first CSS specification already included list-style-type and other properties to style lists, and it was released before HTML 3.2 — the first HTML spec that included some of the previous list attributes. This means that at least on paper, we had CSS list styling before HTML list attributes, so the answer isn’t as simple as “they were there before CSS.”
Without CSS, a static page (such as this guide) won’t be pretty, but at the very least, it should be readable. For example, the type attribute ensures that styled ordered lists won’t lose their meaning if CSS is missing, which is especially useful in legal or technical documents. Some attributes wouldn’t have a CSS equivalent until years later, including reversed, start and value.
Styling Simple Counters in CSS
For most use cases, styling lists in CSS doesn’t take more than a couple of rules, but even in that brevity, we can find different ways to style the same list.
::marker or ::before?
The ::marker pseudo-element represents the counter part of a list item. As a pseudo-element, we can set its content property to any string to change its counter representation:
li::marker {
content: "💜 ";
}
Bread
Milk
Butter
Apples
The content in pseudo-elements also accepts images, which allows us to create custom markers:
li::marker {
content: url("./logo.svg") " ";
}
bread
milk
butter
apples
By default, only li elements have a ::marker but we can give it to any element by setting its display property to list-item:
This will give each h4 a ::marker which we can change to any string:
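A minimal sketch of what that could look like (the marker string here is just a placeholder):

```css
h4 {
  display: list-item; /* gives each h4 its own ::marker */
}

h4::marker {
  content: "→ "; /* any string works */
}
```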
List Title
However, ::marker is an odd case: it was described in the CSS spec more than 20 years ago, but only gained somewhat reliable support in 2020 and still isn’t fully supported in Safari. What’s worse, only font-related properties (such as font-size or color) are allowed, so we can’t change its margin or background-color.
This has led many to use ::before instead of ::marker, so you’ll see a lot of CSS in which the author got rid of the ::marker using list-style-type: none and used ::before instead:
The list-style-type property can be used to replace the ::marker‘s string. Unlike ::marker, list-style-type has been around forever and is most people’s go-to option for styling lists. It can take a lot of different counter styles that are built-in in browsers, but you will probably use one of the following:
For unordered lists:
disc
circle
square
ul {
list-style-type: square;
}
ul {
list-style-type: circle;
}
bread
milk
butter
apples
For ordered lists:
decimal
decimal-leading-zero
lower-roman
upper-roman
lower-alpha
upper-alpha
ol {
list-style-type: upper-roman;
}
ol {
list-style-type: lower-alpha;
}
It can also take none to remove the marker altogether, and, as of fairly recently, a <string> for ul elements.
ul {
list-style-type: none;
}
ul {
list-style-type: "➡️ ";
}
Creating Custom Counters
For a long time, there wasn’t a CSS-equivalent to the HTML reverse, start or value attributes. So if we wanted to reverse or change the start of multiple lists, instead of a CSS class to rule them all, we had to change their HTML one by one. You can imagine how repetitive that would get.
Besides, list attributes simply had their limitations: we can’t change how they increment with each item and there isn’t an easy way to attach a prefix or suffix to the counter. And maybe the biggest reason of all is that there wasn’t a way to number things that weren’t lists!
Custom counters let us number any collection of elements with a whole new level of customization. The workflow is to:
Initiate the counter with the counter-reset property.
As I mentioned, we can make a list out of any collection of elements, and while this has its accessibility concerns, just for demonstration’s sake, let’s try to turn a collection of headings like this…
…into something that looks list-like. But just because we can make an element look like a list doesn’t always mean we should do it. Be sure to consider how the list will be announced by assistive technologies, like screen readers, and see the Accessibility section for more information.
Initiate counters: counter-reset
The counter-reset property takes two things: the name of the counter as a custom ident and the initial count as an integer. If the initial count isn’t given, then it will start at 0 by default:
.index {
counter-reset: index;
/* The same as */
counter-reset: index 0;
}
You can initiate several counters at once with a space-separated list and set a specific value for each one:
.index {
counter-reset: index another-counter 2;
}
This will start our index counter at 0 (the default) and another-counter at 2.
Set counters: counter-set
The counter-set property works similarly to counter-reset: it takes the counter’s name followed by an integer, but this time it sets the count from that element onwards. If the integer is omitted, it will set the counter to 0 by default.
h2:nth-child(2) {
counter-set: index;
/* same as */
counter-set: index 0;
}
And we can set several counters at once, as well:
h2:nth-child(3) {
counter-set: index 5 another-counter 10;
}
This will set the third h2 element’s index count to 5 and another-counter to 10.
If there isn’t an active counter with that name, counter-set will initiate it at 0.
Increment counters: counter-increment
Right now, we have our counter, but it will stagnate at 0 since we haven’t set which elements should increment it. We can use the counter-increment property for that, which takes the name of the counter and how much it should be incremented by. If we only write the counter’s name, it will increment it by 1.
In this case, we want each h2 title to increment the counter by one, and that should be as easy as setting counter-increment to the counter’s name:
h2 {
counter-increment: index;
/* same as */
counter-increment: index 1;
}
Just like with counter-reset, we can increment several counters at once in a space-separated list:
h2 {
counter-increment: index another-counter 2;
}
This will increment index by one and another-counter by two on each h2 element.
If there isn’t an active counter with that name, counter-increment will initiate it at 0.
Output simple lists: counter()
So far, we won’t see any change in the counter representation. The counters are counting but not showing, so to output the counter’s result we use the counter() and counters() functions. Yes, those are two functions with similar names but important differences.
The counter() function takes the name of a counter and outputs its content as a string. If many active counters have the same name, it will select the one that is defined closest to the element, so we can only output one counter at a time.
As mentioned earlier, we can set an element’s display to list-item to work with its ::marker pseudo-element:
h2 {
display: list-item;
}
Then, we can use counter() in its content property to output the current count. This allows us to prefix and suffix the counter by writing a string before or after the counter() function:
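As a sketch, reusing the index counter from the earlier examples (the "Chapter" wording is just an illustration):

```css
h2 {
  display: list-item;
}

h2::marker {
  /* "Chapter " is the prefix string, ". " is the suffix string */
  content: "Chapter " counter(index) ". ";
}
```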
We would need to initiate individual counters and write different counter() functions for each level of nesting, and that’s only possible if we know how deep the nesting goes, which we simply don’t at times.
In this case, we use the counters() function, which also takes the name of a counter as an argument but instead of just outputting its content, it will join all active counters with that name into a single string and output it. To do so, it takes a string as a second argument, usually something like a dot (".") or dash ("-") that will be used between counters to join them.
We can use counter-reset and counter-increment to initiate a counter for each ol element, while each li will increment its closest counter by 1:
ol {
counter-reset: item;
}
li {
counter-increment: item;
}
But this time, instead of using counter() (which would only display one counter per item), we will use counters() to join all active counters with a string (e.g., ".") and output them at once:
li::marker {
content: counters(item, ".") ". ";
}
Styling Counters
Both the counter() and counters() functions accept one additional, yet optional, last argument representing the counter style, the same ones we use in the list-style-type property. So in our last two examples, we could change the counter styles to Roman numbers and alphabetic letters, respectively:
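A sketch of both, reusing the index and item counters from the previous examples:

```css
/* single-level counter rendered in Roman numerals */
h2::marker {
  content: counter(index, upper-roman) ". ";
}

/* nested counters joined with dots, rendered as letters */
li::marker {
  content: counters(item, ".", lower-alpha) ". ";
}
```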
It’s possible to count backward using custom counters, but we need to know beforehand the number of elements we’ll count. So for example, if we want to make a Top Five list in reversed order:
<h1>Best rated animation movies</h1>
<ol>
<li>Toy Story 2</li>
<li>Toy Story 1</li>
<li>Finding Nemo</li>
<li>How to Train your Dragon</li>
<li>Inside Out</li>
</ol>
We have to initiate our counter at the total number of elements plus one (so it doesn’t end at 0):
ol {
counter-reset: movies 6;
}
And then set the increment to a negative integer:
li {
counter-increment: movies -1;
}
To output the count we use counter() as we did before:
li::marker {
content: counter(movies) ". ";
}
There is also a way to write reversed counters supported in Firefox, but it hasn’t shipped to any other browser. Using the reversed() functional notation, we can wrap the counter name while initiating it to say it should be reversed.
ol {
counter-reset: reversed(movies);
}
li {
counter-increment: movies;
}
li::marker {
content: counter(movies) ". ";
}
Styling Custom Counters
The last section was all about custom counters: we changed where they started and how they increased, but at the end of the day, their output was styled in one of the browser’s built-in counter styles, usually decimal. Now, using @counter-style, we’ll build our own counter styles to style any list.
The @counter-style at-rule, as its name implies, lets you create custom counter styles. After writing the at-rule it takes a custom ident as a name:
@counter-style my-counter-style {
/* etc. */
}
That name can be used inside the properties and functions that take a counter style, such as list-style-type or the last argument in counter() and counters():
fixed will write the characters in the symbols descriptor just one time. In the last example, only the first two items would have a custom counter if set to fixed, while the others would drop to their fallback, which is decimal by default.
We can set when the custom counters start by appending an <integer> to the fixed value. For example, the following custom counter will start at the fourth item:
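Something like the following (the symbols themselves are arbitrary):

```css
@counter-style starts-at-four {
  system: fixed 4; /* the first symbol maps to the fourth item */
  symbols: "④" "⑤" "⑥";
  suffix: " ";
}
```

Items 1 through 3 fall back to the default decimal style.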
numeric will numerate list items using a custom positional system (base-2, base-8, base-16, etc.). Positional systems start at 0, so the first character at symbols will be used as 0, the next as 1, and so on. Knowing this, we can make an ordered list using non-decimal numerical systems like hexadecimal:
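A hexadecimal counter style could be sketched like this:

```css
@counter-style hexadecimal {
  system: numeric;
  /* first symbol is 0, second is 1, and so on: a base-16 system */
  symbols: "0" "1" "2" "3" "4" "5" "6" "7" "8" "9" "A" "B" "C" "D" "E" "F";
}

ol {
  list-style-type: hexadecimal;
}
```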
alphabetic will enumerate the list items using a custom alphabetical system. It’s similar to the numeric system, but with the key difference that it doesn’t have a character for 0, so the digits are just repeated. For example, if our symbols are "A" "B" "C", they will wrap to "AA", "AB", "AC", then "BA", "BB", "BC", and so on.
Since there is no equivalent for 0 and negative values, they will drop down to their fallback.
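For instance:

```css
@counter-style abc {
  system: alphabetic;
  symbols: "A" "B" "C"; /* counts A, B, C, AA, AB, AC, BA, … */
  suffix: ". ";
}
```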
symbolic will go through the characters in symbols repeating them one more time each iteration. So for example, if our symbols are "A", "B", "C", it will go “A”, “B”, and “C”, double them in the next iteration as “AA”, “BB”, and “CC”, then triple them as “AAA”, “BBB”, “CCC” and so on.
Since there is no equivalent for 0 and negative values, they will drop down to their fallback.
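For example, the classic footnote-marker sequence is a symbolic system:

```css
@counter-style footnotes {
  system: symbolic;
  symbols: "*" "†" "‡"; /* *, †, ‡, **, ††, ‡‡, ***, … */
  suffix: " ";
}
```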
additive will give characters a numerical value and add them together to get the counter representation. You can think of it as the way we usually count bills: if we have only $5, $2, and $1 bills, we will add them together to get the desired quantity, trying to keep the number of bills used at a minimum. So to represent 10, we will use two $5 bills instead of ten $1 bills.
Since there is no equivalent for negative values, they will drop down to their fallback.
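Sticking with the bills analogy, an additive style could be sketched like this:

```css
@counter-style bills {
  system: additive;
  /* weights must be listed in descending order */
  additive-symbols: 5 "$5 ", 2 "$2 ", 1 "$1 ";
}
```

With this style, item 8 would be represented as "$5 $2 $1".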
Notice how we use additive-symbols when the system is additive, while we use just symbols for the previous systems.
extends will create a custom style from another one, but with modifications. To do so, it takes a <counter-style-name> after the extends value. For example, we could change the default decimal counter style’s suffix to a closing parenthesis (")"):
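That could be written as:

```css
@counter-style decimal-paren {
  system: extends decimal;
  suffix: ") "; /* renders 1) 2) 3) instead of 1. 2. 3. */
}
```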
Per spec, “If a @counter-style uses the extends system, it must not contain a symbols or additive-symbols descriptor, or else the @counter-style rule is invalid.”
The other descriptors
The negative descriptor allows us to create a custom representation for a list’s negative values. It can take one or two characters: The first one is prepended to the counter, and by default it’s the hyphen-minus ("-"). The second one is appended to the symbol. For example, we could enclose negative representations into parenthesis (2), (1), 0, 1, 2:
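For example:

```css
@counter-style parens-negative {
  system: extends decimal;
  negative: "(" ")"; /* -2 renders as (2), -1 as (1) */
}
```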
The prefix and suffix descriptors allow us to prepend and append, respectively, a character to the counter representation. We can use it to add a character at the beginning of each counter using prefix:
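For example (the "Chapter" wording is just an illustration):

```css
@counter-style chapters {
  system: extends decimal;
  prefix: "Chapter ";
  suffix: ": "; /* Chapter 1: Chapter 2: … */
}
```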
The range descriptor defines an inclusive range in which the counter style is used. We can define a bounded range by writing one <integer> next to another. For example, a range of 2 4 will affect elements 2, 3, and 4:
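A bounded range could be sketched like this:

```css
@counter-style starred-middle {
  system: cyclic;
  symbols: "★";
  suffix: " ";
  range: 2 4; /* only items 2, 3, and 4 get the star; the rest use the fallback */
}
```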
On the other hand, using the infinite value we can unbound the range to one side. For example, we could write infinite 3 so all items up to 3 have a counter style:
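For example:

```css
@counter-style starred-up-to-three {
  system: cyclic;
  symbols: "★";
  suffix: " ";
  range: infinite 3; /* everything up to and including item 3 */
}
```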
The pad descriptor takes an <integer> that represents the minimum width for the counter and a character to pad it. For example, a zero-padded counter style would look like the following:
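For example:

```css
@counter-style zero-padded {
  system: extends decimal;
  pad: 3 "0"; /* 1 → 001, 42 → 042, 100 → 100 */
}
```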
The fallback descriptor allows you to define which counter style should be used as a fallback whenever we can’t represent a specific count. For example, the following counter style is fixed and will fallback to lower-roman after the sixth item:
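Something like:

```css
@counter-style first-six {
  system: fixed;
  symbols: "①" "②" "③" "④" "⑤" "⑥";
  fallback: lower-roman; /* item 7 onward renders as vii, viii, … */
}
```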
The symbols() function defines a single-use counter style without the need to write a whole @counter-style, but at the cost of missing some features. It can be used inside the list-style-type property and the counter() and counters() functions.
ol {
list-style-type: symbols(cyclic "🥬");
}
However, its browser support is appalling since it’s only supported in Firefox.
Images in Counters
In theory, there are four ways to add images to lists:
list-style-image property
content property
symbols descriptor in @counter-style
symbols() function.
In practice, the only supported ways are using list-style-image and content, since support for images in @counter-style and support in general for symbols() isn’t the best (it’s pretty bad).
list-style-image
The list-style-image can take an image or a gradient. In this case, we want to focus on images but gradients can also be used to create custom square bullets:
Sadly, changing the shape would require styling the ::marker further, and that isn’t currently possible.
To use an image, we pass its url(); make sure it’s small enough to work as a marker:
li {
list-style-image: url("./logo.svg");
}
bread
milk
butter
apples
content
The content property works similar to list-style-image: we pass the image’s url() and provide a little padding on the left as an empty string:
li::marker {
content: url("./logo.svg") " ";
}
Spacing Things Out
You may notice in the last part how the image, depending on its size, isn’t completely centered on the text, and also that we provide an empty string in content properties for spacing instead of giving things a padding or margin. Well, there’s an explanation for all of this, since spacing is one of the biggest pain points when it comes to styling lists.
Margins and paddings are wacky
Spacing the ::marker from the list item should be as easy as increasing the marker’s or list margin, but in reality, it takes a lot more work.
First, the padding and margin properties aren’t allowed on ::marker. Second, lists have two types of elements: the list wrapper (usually ol or ul) and the list items (li), each with a default padding and margin. Which should we use?
You’ll notice that the only property affecting the spacing between the ::marker and the text is the li element’s padding property, while the rest of the spacing properties move the entire list item. Another thing to note is that even when the padding is set to 0px, there is a space after the ::marker. This is set by browsers and will vary depending on which browser you’re using.
list-style-position
One last thing you may notice in the demo is a checkbox for the list-style-position property, and how once you set it to inside, the ::marker will move to the inside of the box, at the cost of removing any spacing given by the list item’s padding.
By default, markers are rendered outside the ul element’s box. A lot of times, this isn’t the best behavior: markers sneak out of elements, text-align won’t align the marker, and paradoxically, centered lists with flex or grid won’t look completely centered since the markers are outside the box.
To change this, we can use the list-style-position property. It takes either outside (the default) or inside to define where the list marker is positioned: outside or inside the ul box.
Appending a space to content feels more like a workaround than the optimal solution.
And I completely agree that’s true, but just using ::marker there isn’t a correct way to add spacing between the ::marker and the list text, especially since most people prefer to set list-style-position to inside. So, as much as it pains me to say it, the simplest way to increase the gap after the marker is to suffix the content property with an empty string:
li::marker {
content: "• ";
}
bread
milk
butter
apples
BUT! This is only if we want to be purists and stick with the ::marker pseudo-element because, in reality, there is a much better way to position that marker: not using it at all.
Just use ::before
There is a reason people love using ::before more than ::marker. With ::marker, we can’t use something like CSS Grid or Flexbox, since changing the display of li to anything other than list-item removes the ::marker, and we can’t set the ::marker’s height or width properties to better align it.
Let’s be real, ::marker works fine when we just want simple styling. But we are not here for simple styling! Once we want something more involved, ::marker will fall short and we’ll have to use the ::before pseudo-element.
Using ::before means we can use Flexbox, which allows for two things we couldn’t do before:
Vertically center the marker with the text
Easily increase the gap after the marker
Both can be achieved with Flexbox:
li {
display: flex;
align-items: center; /* Vertically center the marker */
gap: 20px; /* Increases the gap */
list-style-type: none;
}
The original ::marker is removed by changing the display.
Accessibility
In a previous section, we turned things that weren’t lists into things that look like lists, so the question arises: should we actually do that? Doesn’t it hurt accessibility to make something look like a list when it isn’t one? As always, it depends. For a sighted user, all the examples in this entry look all right, but for assistive technology users, some examples lack the necessary markup for accessible navigation.
Take for example our initial demo. Here, listing titles serves as decoration since the markup structure is given by the titles themselves. It’s the same deal for the counting siblings demo from earlier, as assistive technology users can read the document through the title structure.
However, this is the exception rather than the norm. That means a couple of the examples we looked at would fail if we need the list to be announced as a list in assistive technology, like screen readers. For example this list we looked at earlier:
How CSS relates to web performance is a funny dance. Some aspects are entirely negligible the vast majority of time. Some aspects are incredibly impactful and crucial to consider.
For example, whenever I see research into the performance of some form of CSS syntax, the results always seem to be meh, it’s fine. It can matter, but typically only with fairly extreme DOM weight situations, and spending time optimizing selectors is almost certainly wasted time. I do like that the browser powers that be think and care about this though, like Bramus here measuring the performance of @property for CSS Custom Property performance. In the end, it doesn’t matter much, which is an answer I hope they knew before it shipped everywhere (they almost certainly did). Issues with CSS syntax tend to be about confusion or error-prone situations, not speed.
But even though the syntax of CSS isn’t particularly worrisome for performance, the weight of it generally does matter. It’s important to remember that CSS loaded as a regular <link> in the <head> is render blocking, so until it’s downloaded and parsed, the website will not be displayed. Ship, say, 1.5MB of CSS, and the site’s performance will absolutely suffer for absolutely everyone. JavaScript is generally a worse offender on the web when it comes to size and resources, but at least its loading is usually deferred.
The times when CSS performance tends to rear its head are in extreme DOM weight situations. Like a web page that renders all of Moby Dick, or every single Unicode character, or 10,000 product images, or a million screenshots, or whatever. At that scale, a box-shadow just has a crazy amount of work to do. But even then, while CSS can be the cause of pain, it can be the solution as well. The content-visibility property in CSS can inform the browser to chill out on rendering more than it needs to up front. It’s not the most intuitive feature to use, but it’s nice we have these tools when we need them.
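For instance, a long page can hint that off-screen sections may skip rendering work up front (a sketch; the `.chapter` class name is made up):

```css
.chapter {
  /* let the browser skip rendering this section until it nears the viewport */
  content-visibility: auto;
  /* reserve an approximate size so the scrollbar doesn't jump around */
  contain-intrinsic-size: auto 800px;
}
```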
Scott Jehl released a course called Web Components Demystified. I love that name because it says what the course is about right on the tin: you’re going to learn about web components and clear up any confusion you may already have about them.
And there’s plenty of confusion to go around! “Components” is already a loaded term that’s come to mean everything from a piece of UI, like a search component, to an element you can drop in and reuse anywhere, such as a React component. The web is chock-full of components, tell you what.
But what we’re talking about here is a set of standards where HTML, CSS, and JavaScript rally together so that we can create custom elements that behave exactly how we want them to. It’s how we can make an element called <tasty-pizza> and the browser knows what to do with it.
This is my full set of notes from Scott’s course. I wouldn’t say they’re complete or even a direct one-to-one replacement for watching the course. You’ll still want to do that on your own, and I encourage you to because Scott is an excellent teacher who makes all of this stuff extremely accessible, even to noobs like me.
Chapter 1: What Web Components Are… and Aren’t
Web components are not built-in elements, even though that’s what they might look like at first glance. Rather, they are a set of technologies that allow us to instruct what the element is and how it behaves. Think of it the same way that “responsive web design” is not a thing but rather a set of strategies for adapting design to different web contexts. So, just as responsive web design is a set of ingredients (fluid grids, flexible images, and media queries), web components are a concoction involving:
Custom elements
These are HTML elements that are not built into the browser. We make them up. They include a letter and a dash.
HTML templates
The <template> element holds markup that the browser parses but does not render until we put it to use. We’ll cover this in the third module.
Shadow DOM
The Shadow DOM is a fragment of the DOM where markup, scripts, and styles are encapsulated from other DOM elements. We’ll cover this in the fourth module, including how to <slot> content.
There used to be a fourth “ingredient” called HTML Imports, but those have been nixed.
In short, web components might be called “components” but they aren’t really components so much as technologies. In React, a component works something like a partial: it defines a snippet of HTML that you drop into your code and it outputs in the DOM. Web components are built off of HTML elements. They are not replaced when rendered the way they are in JavaScript component frameworks. Web components are quite literally HTML elements and have to obey HTML rules. For example:
We’re generating meaningful HTML up-front rather than rendering it in the browser through the client after the fact. Provide the markup and enhance it! Web components have been around a while now, even if it seems we’re only starting to talk about them now.
Chapter 2: Custom Elements
First off, custom elements are not built-in HTML elements. We instruct what they are and how they behave. They are named with a dash and must contain at least one letter. All of the following are valid names for custom elements:
<super-component>
<a->
<a-4->
<card-10.0.1>
<card-♠️>
Just remember that there are some reserved names for MathML and SVG elements, like <font-face>. Also, they cannot be void elements, e.g. <my-element />, meaning they have to have a corresponding closing tag.
Since custom elements are not built-in elements, they are undefined by default — and being undefined can be a useful thing! That means we can use them as containers with default properties. For example, they are display: inline by default and inherit the current font-family, which can be useful to pass down to the contents. We can also use them as styling hooks since they can be selected in CSS. Or maybe they can be used for accessibility hints. The bottom line is that they do not require JavaScript in order to make them immediately useful.
Working with JavaScript. If there is one <my-button> on the page, we can query it and set a click handler on it with an event listener. But if we were to insert more instances on the page later, we would need to query it when it’s appended and re-run the function since it is not part of the original document rendering.
Defining a custom element
This defines and registers the custom element. It teaches the browser that this is an instance of the Custom Elements API and extends the same class that makes other HTML elements valid HTML elements:
<my-element>My Element</my-element>
<script>
customElements.define("my-element", class extends HTMLElement {});
</script>
Check out the methods we get immediate access to:
Breaking down the syntax
customElements
.define(
"my-element",
class extends HTMLElement {}
);
// Functionally the same as:
class MyElement extends HTMLElement {}
customElements.define("my-element", MyElement);
export default MyElement;
// ...which makes it importable by other elements:
import MyElement from './MyElement.js';
const myElement = new MyElement();
document.body.appendChild(myElement);
// <body>
// <my-element></my-element>
// </body>
// Or simply pull it into a page
// Don't need to `export default` but it doesn't hurt to leave it
// <my-element>My Element</my-element>
// <script type="module" src="my-element.js"></script>
It’s possible to define a custom element by extending a specific HTML element. The specification documents this, but Scott is focusing on the primary way.
class WordCount extends HTMLParagraphElement {}
customElements.define("word-count", WordCount, { extends: "p" });
// <p is="word-count">This is a custom paragraph!</p>
Scott says do not use this because WebKit is not going to implement it. We would have to polyfill it forever, or as long as WebKit holds out. Consider it a dead end.
The lifecycle
A component has various moments in its “life” span:
Constructed (constructor)
Connected (connectedCallback)
Adopted (adoptedCallback)
Attribute Changed (attributeChangedCallback)
Disconnected (disconnectedCallback)
We can hook into these to define the element’s behavior.
class MyElement extends HTMLElement {
constructor() {
// provides us with the `this` keyword
super()
// add a property
this.someProperty = "Some value goes here";
// add event listener
this.addEventListener("click", () => {});
}
}
customElements.define("my-element", MyElement);
“When the constructor is called, do this…” We don’t have to have a constructor when working with custom elements, but if we do, then we need to call super() because we’re extending another class and we’ll get all of those properties.
Constructor is useful, but not for a lot of things. It’s useful for setting up initial state, registering default properties, adding event listeners, and even creating Shadow DOM (which Scott will get into in a later module). For example, we are unable to sniff out whether or not the custom element is in another element because we don’t know anything about its parent container yet (that’s where other lifecycle methods come into play) — we’ve merely defined it.
connectedCallback()
class MyElement extends HTMLElement {
// the constructor is unnecessary in this example but doesn't hurt.
constructor() {
super()
}
// let me know when my element has been found on the page.
connectedCallback() {
console.log(`${this.nodeName} was added to the page.`);
}
}
customElements.define("my-element", MyElement);
Note that there is some strangeness when it comes to timing things. Sometimes isConnected returns true during the constructor. connectedCallback() is our best way to know when the component is found on the page. This is the moment it is connected to the DOM. Use it to attach event listeners.
If the <script> tag comes before the DOM is parsed, then it might not recognize childNodes. This is not an uncommon situation. But if we add type="module" to the <script>, then the script is deferred and we get the child nodes. Using setTimeout can also work, but it looks a little gross.
disconnectedCallback
class MyElement extends HTMLElement {
// let me know when my element has been found on the page.
disconnectedCallback() {
console.log(`${this.nodeName} was removed from the page.`);
}
}
customElements.define("my-element", MyElement);
This is useful when the component needs to be cleaned up, perhaps like stopping an animation or preventing memory leaks.
adoptedCallback()
This is when the component is adopted by another document or page. Say you have some iframes on a page and move a custom element from the page into an iframe; it would be adopted in that scenario. It would be created, then added, then removed, then adopted, then added again. That’s a full lifecycle! The callback fires automatically, even when the element is simply picked up and dragged between documents in the DOM.
Custom elements and attributes
Unlike React, HTML attributes are strings (not props!). Global attributes work as you’d expect, though some global attributes are reflected as properties. You can make any attribute do that if you want, just be sure to use care and caution when naming because, well, we don’t want any conflicts.
Avoid standard attributes on a custom element as well, as that can be confusing particularly when handing a component to another developer. Example: using type as an attribute which is also used by <input> elements. We could say data-type instead. (Remember that Chris has a comprehensive guide on using data attributes.)
Examples
Here’s a quick example showing how to get a greeting attribute and set it on the custom element:
class MyElement extends HTMLElement {
get greeting() {
return this.getAttribute('greeting');
// return this.hasAttribute('greeting');
}
set greeting(val) {
if(val) {
this.setAttribute('greeting', val);
// this.setAttribute('greeting', '');
} else {
this.removeAttribute('greeting');
}
}
}
customElements.define("my-element", MyElement);
Another example, this time showing a callback for when the attribute has changed, which prints it in the element’s contents:
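That demo isn’t reproduced here, but the shape of it is the standard Custom Elements API (a sketch; the element name is made up):

```html
<my-element greeting="Hello"></my-element>
<script>
  customElements.define("my-element", class extends HTMLElement {
    // tell the browser which attributes to observe
    static observedAttributes = ["greeting"];

    attributeChangedCallback(name, oldValue, newValue) {
      // runs on the initial set and on every change; print the value
      this.textContent = `${newValue}, World!`;
    }
  });
</script>
```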
Some other handy methods on the customElements registry:
customElements.get('my-element');
// returns MyElement Class
customElements.getName(MyElement);
// returns 'my-element'
customElements.whenDefined("my-element");
// waits for custom element to be defined
const el = document.createElement("spider-man");
class SpiderMan extends HTMLElement {
constructor() {
super();
console.log("constructor!!");
}
}
customElements.define("spider-man", SpiderMan);
customElements.upgrade(el);
// logs "constructor!!"
Bring your own base class, in the same way web components frameworks like Lit do:
class BaseElement extends HTMLElement {
$ = this.querySelector;
}
// extend the base, use its helper
class myElement extends BaseElement {
firstLi = this.$("li");
}
Practice prompt
Create a custom HTML element called <say-hi> that displays the text “Hi, World!” when added to the page:
Enhance the element to accept a name attribute, displaying "Hi, [Name]!" instead:
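One possible solution to both prompts (a sketch, not Scott’s official answer):

```html
<say-hi name="Jessica"></say-hi>
<script>
  customElements.define("say-hi", class extends HTMLElement {
    connectedCallback() {
      // fall back to "World" when no name attribute is present
      const name = this.getAttribute("name") || "World";
      this.textContent = `Hi, ${name}!`;
    }
  });
</script>
```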
Chapter 3: HTML Templates
The <template> element is not for users but developers. It is not exposed visibly by browsers.
<template>The browser ignores everything in here.</template>
A template is selectable in CSS; it just doesn’t render. It’s a document fragment. The inner document is a #document-fragment. Not sure why you’d do this, but it illustrates the point that templates are selectable:
template { display: block; } /* Nope */
template + div { height: 100px; width: 100px; } /* Works */
The content property
No, not in CSS, but JavaScript. We can query the inner contents of a template and print them somewhere else.
const myFrag = document.createDocumentFragment();
myFrag.innerHTML = "<p>Test</p>"; // Nope
const myP = document.createElement("p"); // Yep
myP.textContent = "Hi!";
myFrag.append(myP);
// use the fragment
document.body.append(myFrag);
Clone a node
<template>
<p>Hi</p>
</template>
<script>
const myTmpl = document.querySelector("template").content;
console.log(myTmpl);
// Oops, only works one time! We need to clone it.
</script>
Oops, the component only works one time! We need to clone it if we want multiple instances:
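A sketch of the cloning fix:

```html
<template>
  <p>Hi</p>
</template>
<script>
  const myTmpl = document.querySelector("template").content;
  // cloneNode(true) deep-copies the fragment, children and all
  document.body.append(myTmpl.cloneNode(true));
  document.body.append(myTmpl.cloneNode(true)); // as many instances as we like
</script>
```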
The other way to use templates that we’ll get to in the next module: Shadow DOM
<template shadowrootmode="open">
<p>Hi, I'm in the Shadow DOM</p>
</template>
Chapter 4: Shadow DOM
Here we go, this is a heady chapter! The Shadow DOM reminds me of playing bass in a band: it’s easy to understand but incredibly difficult to master. It’s easy to understand that there are these nodes in the DOM that are encapsulated from everything else. They’re there, we just can’t really touch them with regular CSS and JavaScript without some finagling. It’s the finagling that’s difficult to master. There are times when the Shadow DOM is going to be your best friend because it prevents outside styles and scripts from leaking in and mucking things up. Then again, you’re most certainly going to want to style or apply scripts to those nodes, and you have to figure that part out.
That’s where web components really shine. We get the benefits of an element that’s encapsulated from outside noise but we’re left with the responsibility of defining everything for it ourselves.
The <select> element is a great example of the Shadow DOM at work in a built-in element. Shadow roots! Slots! They’re all part of the puzzle.
Using the Shadow DOM
We covered the <template> element in the last chapter and determined that it renders in the Shadow DOM without getting displayed on the page.
<template shadowrootmode="closed">
<p>This will render in the Shadow DOM.</p>
</template>
In this case, the <template> is rendered as a #shadow-root without the <template> element’s tags. It’s a fragment of code. So, while the paragraph inside the template is rendered, the <template> itself is not. It effectively marks the Shadow DOM’s boundaries. If we were to omit the shadowrootmode attribute, then we simply get an unrendered template. Either way, though, the paragraph is there in the DOM and it is encapsulated from other styles and scripts on the page.
These are all of the elements that can have a shadow.
Breaching the shadow
There are times you’re going to want to “pierce” the Shadow DOM to allow for some styling and scripts. The content is relatively protected, but we can set shadowrootmode to open and allow some access.
<div>
<template shadowrootmode="open">
<p>This will render in the Shadow DOM.</p>
</template>
</div>
Now we can query the div that contains the <template> and select the #shadow-root:
document.querySelector("div").shadowRoot
// #shadow-root (open)
// <p>This will render in the Shadow DOM.</p>
We need that <div> in there so we have something to query in the DOM to get to the paragraph. Remember, the <template> is not actually rendered at all.
Additional shadow attributes
<!-- should this root stay with a parent clone? -->
<template shadowrootclonable>
<!-- allow shadow to be serialized into a string object — can forget about this -->
<template shadowrootserializable>
<!-- click in element focuses first focusable element -->
<template shadowrootdelegatesfocus>
Shadow DOM siblings
When you add a shadow root, it becomes the only rendered root in that shadow host. Any elements after a shadow root node in the DOM simply don’t render. If a DOM element contains more than one shadow root node, the ones after the first just become template tags. It’s sort of like the Shadow DOM is a monster that eats the siblings.
Slots bring those siblings back!
<div>
<template shadowrootmode="closed">
<slot></slot>
<p>I'm a sibling of a shadow root, and I am visible.</p>
</template>
</div>
All of the siblings go through the slots and are distributed that way. It’s sort of like slots allow us to open the monster’s mouth and see what’s inside.
Declaring the Shadow DOM
Using templates is the declarative way to define the Shadow DOM. We can also define the Shadow DOM imperatively using JavaScript. So, this is doing the exact same thing as the last code snippet, only it’s done programmatically in JavaScript:
<my-element>
<template shadowrootmode="open">
<p>This will render in the Shadow DOM.</p>
</template>
</my-element>
<script>
customElements.define('my-element', class extends HTMLElement {
constructor() {
super();
// attaches a shadow root node
this.attachShadow({mode: "open"});
// inserts a slot into the template
this.shadowRoot.innerHTML = '<slot></slot>';
}
});
</script>
So, is it better to be declarative or imperative? Like the weather where I live, it just depends.
Both approaches have their benefits.
We can set the shadow options via JavaScript as well (mode is always required; the other options ride along with it):
// open
this.attachShadow({ mode: "open" });
// closed
this.attachShadow({ mode: "closed" });
// clonable
this.attachShadow({ mode: "open", clonable: true });
// delegatesFocus
this.attachShadow({ mode: "open", delegatesFocus: true });
// serializable
this.attachShadow({ mode: "open", serializable: true });
// Manually assign an element to a slot
this.attachShadow({ mode: "open", slotAssignment: "manual" });
About that last one, it says we have to manually insert the <slot> elements in JavaScript:
<my-element>
<p>This WILL render in shadow DOM but not automatically.</p>
</my-element>
<script>
customElements.define('my-element', class extends HTMLElement {
constructor() {
super();
this.attachShadow({
mode: "open",
slotAssignment: "manual"
});
this.shadowRoot.innerHTML = '<slot></slot>';
}
connectedCallback(){
const slotElem = this.querySelector('p');
this.shadowRoot.querySelector('slot').assign(slotElem);
}
});
</script>
Examples
Scott spent a great deal of time sharing examples that demonstrate different sorts of things you might want to do with the Shadow DOM when working with web components. I’ll rapid-fire those in here.
Get an array of element nodes in a slot
this.shadowRoot.querySelector('slot')
.assignedElements();
// get an array of all nodes in a slot, text too
this.shadowRoot.querySelector('slot')
.assignedNodes();
Long story, cut short: maybe don’t create custom form controls as web components. We get a lot of free features and functionalities — including accessibility — with native form controls that we have to recreate from scratch if we decide to roll our own.
In the case of forms, one of the oddities of encapsulation is that form submissions are not automatically connected. Let’s look at a broken form that contains a web component for a custom input:
<form>
<my-input>
<template shadowrootmode="open">
<label>
<slot></slot>
<input type="text" name="your-name">
</label>
</template>
Type your name!
</my-input>
<label><input type="checkbox" name="remember">Remember Me</label>
<button>Submit</button>
</form>
<script>
document.forms[0].addEventListener('input', function(){
let data = new FormData(this);
console.log(new URLSearchParams(data).toString());
});
</script>
This input’s value won’t be in the submission! Also, form validation and states are not communicated in the Shadow DOM. There are similar connectivity issues with accessibility, where the shadow boundary can interfere with ARIA; for example, IDs are local to the Shadow DOM. Consider how much you really need the Shadow DOM when working with forms.
Element internals
The moral of the last section is to tread carefully when creating your own web components for form controls. Scott suggests avoiding that altogether, but he continued to demonstrate how we could theoretically fix functional and accessibility issues using element internals.
Let’s start with an input value that will be included in the form submission.
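The code for that isn’t reproduced here, but the rough shape uses the ElementInternals API (a sketch; my-input is a hypothetical element):

```html
<form>
  <my-input name="your-name"></my-input>
</form>
<script>
  customElements.define("my-input", class extends HTMLElement {
    // opt this element into form participation
    static formAssociated = true;

    constructor() {
      super();
      this.internals = this.attachInternals();
      this.attachShadow({ mode: "open" });
      this.shadowRoot.innerHTML = '<input type="text">';
    }

    connectedCallback() {
      const input = this.shadowRoot.querySelector("input");
      // mirror the inner input's value into the form submission
      input.addEventListener("input", () => {
        this.internals.setFormValue(input.value);
      });
    }
  });
</script>
```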
Phew, that’s a lot of work! And sure, this gets us a lot closer to a more functional and accessible custom form input, but there’s still a long way to go to achieve what we already get for free from native form controls. Always question whether you can rely on a light DOM form instead.
Chapter 5: Styling Web Components
Styling web components comes in levels of complexity. For example, we don’t need any JavaScript at all to slap a few styles on a custom element.
<my-element theme="suave" class="priority">
<h1>I'm in the Light DOM!</h1>
</my-element>
<style>
/* Element, class, attribute, and complex selectors all work. */
my-element {
display: block; /* custom elements are inline by default */
}
my-element[theme="suave"] {
color: #fff;
}
my-element.priority {
background: purple;
}
my-element h1 {
font-size: 3rem;
}
</style>
This is not encapsulated! This is scoped off of a single element just like any other CSS in the Light DOM.
Changing the Shadow DOM mode from closed to open doesn’t change CSS. It allows JavaScript to pierce the Shadow DOM but CSS isn’t affected.
The first and third paragraphs are still receiving the red color from the Light DOM’s CSS.
The <style> declarations in the <template> are encapsulated and do not leak out to the other paragraphs, even though it is declared later in the cascade.
Everything is red! This isn’t a bug. Inheritable styles do pass through the Shadow DOM barrier.
Inherited styles are those that are set by the computed values of their parent styles. Many properties are inheritable, including color. The <body> is the parent and everything in it is a child that inherits these styles, including custom elements.
Let’s fight with inheritance
We can target the paragraph in the <template> style block to override the styles set on the <body>. Those won’t leak back to the other paragraphs.
<style>
body {
color: red;
font-family: fantasy;
font-size: 2em;
}
</style>
<p>Hi</p>
<div>
<template shadowrootmode="open">
<style>
/* reset the light dom styles */
p {
color: initial;
font-family: initial;
font-size: initial;
}
</style>
<p>Hi</p>
</template>
</div>
<p>Hi</p>
This is protected, but the problem here is that it’s still possible for a new inheritable property to be introduced that passes along styles we haven’t thought to reset.
Perhaps we could use all: initial as a defensive strategy against future inheritable styles. But what if we add more elements to the custom element? It’s a constant fight.
Host styles!
We can scope things to the shadow root’s :host selector to keep things protected.
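A sketch of the situation: a :host rule inside the shadow root, up against a universal selector out in the page:

```css
/* inside the shadow root's <style> */
:host {
  color: initial;
  font-family: initial;
}

/* out in the page's stylesheet */
* {
  color: red;
  font-family: fantasy;
}
```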
This breaks the custom element’s styles. But that’s because Shadow DOM styles are applied before Light DOM styles. The styles scoped to the universal selector are simply applied after the :host styles, which overrides what we have in the shadow root. So, we’re still locked in a brutal fight over inheritance and need stronger specificity.
According to Scott, !important is one of the only ways we have to apply brute force to protect our custom elements from outside styles leaking in. The keyword gets a bad rap — and rightfully so in the vast majority of cases — but this is a case where it works well and using it is an encouraged practice. It’s not like it has an impact on the styles outside the custom element, anyway.
There are some useful selectors for looking at components from the outside in.
:host()
We just looked at this! But note how it is a function in addition to being a pseudo-class. It’s sort of a parent selector in the sense that we can pass in the <div> that contains the <template>, and that becomes the scoping context for the entire selector, meaning the !important keyword is no longer needed.
:host-context()
This targets the shadow host, but only if the provided selector matches a parent node anywhere up the tree. This is super helpful for styling custom elements where the layout context might change, say, from being contained in an <article> versus being contained in a <header>.
:defined
Defining an element occurs when it is created, and this pseudo-selector is how we can select the element in that initially-defined state. I imagine this is mostly useful for when a custom element is defined imperatively in JavaScript so that we can target the very moment that the element is constructed, and then set styles right then and there.
Minor note about protecting against a flash of unstyled content (FOUC)… or an unstyled element in this case. Some elements are effectively useless until JavaScript has run and generated their content, like an empty custom element that only becomes meaningful once a script fills it in. Here’s how we can prevent the inevitable flash that happens after the content is generated:
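One common pattern (a sketch, using a made-up element name):

```css
/* hide the element until the browser has defined and upgraded it */
my-element:not(:defined) {
  display: none;
}
```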
Warning zone! It’s best for elements that are empty and not yet defined. If you’re working with a meaningful element up-front, then it’s best to style as much as you can up-front.
Styling slots
This does not style the paragraph green as you might expect:
The Shadow DOM cannot style this content directly. The styles would apply to a paragraph in the <template> that gets rendered in the Light DOM, but it cannot style it when it is slotted into the <template>.
This means that slots are easier to target when it comes to piercing the shadow root with styles, making them a great method of progressive style enhancement.
We have another special selector, the ::slotted() pseudo-element, which is also a function. We pass it an element or class, and that allows us to select slotted elements from within the shadow root.
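For instance (a sketch; these rules live in the shadow root’s own <style> block):

```css
/* any slotted paragraph */
::slotted(p) {
  color: green;
}

/* any slotted element carrying a .fancy class */
::slotted(.fancy) {
  font-style: italic;
}
```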
Unfortunately, ::slotted() is a weak selector when compared to global selectors. So, if we were to make this a little more complicated by introducing an outside inheritable style, then we’d be hosed again.
This is another place where !important could make sense. It even wins if the global style is also set to !important. We could get more defensive and pass the universal selector to ::slotted and set everything back to its initial value so that all slotted content is encapsulated from outside styles leaking in.
<style>
/* global paragraph style... */
p { color: green; }
</style>
<div>
<template shadowrootmode="open">
<style>
/* ...can't override this important statement */
::slotted(*) { all: initial !important; }
</style>
<slot></slot>
</template>
<p>Slotted Element</p>
</div>
Styling ::parts
A part is a way of offering up Shadow DOM elements to the parent document for styling. Let’s add a part to a custom element:
Without the part attribute, there is no way to write styles that reach the paragraph. But with it, the part is exposed as something that can be styled.
We can use this to expose specific “parts” of the custom element that are open to outside styling, which is almost like establishing a styling API with specifications for what can and can’t be styled. Just note that ::part cannot be used as part of a complex selector, like a descendant selector:
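A sketch of the idea (element and part names made up):

```html
<my-component>
  <template shadowrootmode="open">
    <h2 part="title">Hello from the shadow</h2>
  </template>
</my-component>
<style>
  /* Works: the exposed part is directly styleable */
  my-component::part(title) {
    color: teal;
  }

  /* Does not work: ::part() can't anchor a descendant selector */
  my-component::part(title) span {
    color: teal;
  }
</style>
```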
A bit in the weeds here, but we can export parts in the sense that we can nest elements within elements within elements, and so on. This way, we include parts within elements.
<my-component>
<!-- exports three parts from the nested component to the outer document -->
<nested-component exportparts="part1, part2, part5"></nested-component>
</my-component>
Styling states and validity
We discussed this when going over element internals in the chapter about the Shadow DOM. But it’s worth revisiting now that we’re specifically talking about styling. We have a :state() pseudo-class function that accepts our defined states.
For pulling shared styles into the shadow root, there’s the classic ol’ external <link> way of going about it:
<simple-custom>
<template shadowrootmode="open">
<link rel="stylesheet" href="../../assets/external.css">
<p>This one's in the Shadow DOM.</p>
<slot></slot>
</template>
<p>Slotted <b>Element</b></p>
</simple-custom>
It might seem like an anti-DRY approach to call the same external stylesheet at the top of all web components. To be clear, yes, it is repetitive, but only as far as writing it goes. Once the sheet has been downloaded, it is available across the board without any additional requests, so we’re still technically DRY in the sense of performance.
We can also use a JavaScript module that imports CSS into a string, which is then adopted by the shadow root using shadowRoot.adoptedStyleSheets. And since adopted stylesheets are dynamic, we can construct one, share it across multiple instances, and update styles via the CSSOM so the changes ripple across all components that adopt it.
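In code, the constructed-stylesheet side of that looks roughly like this (a sketch, with made-up names):

```js
// construct one stylesheet...
const sheet = new CSSStyleSheet();
sheet.replaceSync("p { color: teal; }");

customElements.define("my-element", class extends HTMLElement {
  constructor() {
    super();
    this.attachShadow({ mode: "open" });
    // ...and share it across every instance's shadow root
    this.shadowRoot.adoptedStyleSheets = [sheet];
  }
});

// later, a single update ripples to every adopter:
sheet.replaceSync("p { color: rebeccapurple; }");
```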
Container queries!
Container queries are nice to pair with components, as custom elements and web components are containers and we can query them and adjust things as the container changes.
In this example, we’re setting styles on the :host() to define a new container, as well as some general styles that are protected and scoped to the shadow root. From there, we introduce a container query that updates the unordered list’s layout when the custom element is at least 50em wide.
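The pattern looks roughly like this (a sketch, not Scott’s exact styles):

```css
:host {
  display: block;
  /* make the custom element a query-able container */
  container-type: inline-size;
}

ul {
  display: grid;
  grid-template-columns: 1fr;
  gap: 1rem;
}

/* when the custom element is at least 50em wide... */
@container (min-width: 50em) {
  ul {
    grid-template-columns: repeat(3, 1fr);
  }
}
```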
Next up…
How web component features are used together!
Chapter 6: HTML-First Patterns
In this chapter, Scott focuses on how other people are using web components in the wild and highlights a few of the more interesting and smart patterns he’s seen.
Let’s start with a typical counter
It’s often the very first example used in React tutorials.
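Scott’s demo isn’t reproduced here, but a bare-bones counter as an HTML-first web component might look something like this (element name made up):

```html
<count-er></count-er>
<script>
  customElements.define("count-er", class extends HTMLElement {
    count = 0;

    connectedCallback() {
      this.innerHTML = "<button>Count: 0</button>";
      const button = this.querySelector("button");
      button.addEventListener("click", () => {
        this.count++;
        button.textContent = `Count: ${this.count}`;
      });
    }
  });
</script>
```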
Reef is a tiny library by Chris Ferdinandi that weighs just 2.6KB minified and gzipped, yet still provides DOM diffing for reactive state-based UIs like React, which weighs significantly more. An example of how it works in a standalone way:
<div id="greeting"></div>
<script type="module">
import {signal, component} from '.../reef.es.min.js';
// Create a signal
let data = signal({
greeting: 'Hello',
name: 'World'
});
component('#greeting', () => `<p>${data.greeting}, ${data.name}!</p>`);
</script>
This sets up a “signal” that is basically a live-update object, then calls the component() method to select where we want to make the update, and it injects a template literal in there that passes in the variables with the markup we want.
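Under the hood, a signal can be approximated with a Proxy that re-runs a render function on every property write. This is a simplified sketch of the idea, not Reef’s actual implementation, and the names are made up:

```javascript
// A Proxy-based "signal": writing to any property triggers a re-render.
function signal(data, onChange) {
  return new Proxy(data, {
    set(target, prop, value) {
      target[prop] = value;
      onChange(); // re-render whenever any property updates
      return true;
    }
  });
}

let output = "";
const render = () => {
  output = `${state.greeting}, ${state.name}!`;
};
const state = signal({ greeting: "Hello", name: "World" }, render);

render();             // initial render: "Hello, World!"
state.name = "Scott"; // the set trap re-renders: "Hello, Scott!"
```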
So, for example, we can update those values on setTimeout:
<div id="greeting"></div>
<script type="module">
import {signal, component} from '.../reef.es.min.js';
// Create a signal
let data = signal({
greeting: 'Hello',
name: 'World'
});
component('#greeting', () => `<p>${data.greeting}, ${data.name}!</p>`);
setTimeout(() => {
data.greeting = '¡Hola';
data.name = 'Scott';
}, 3000);
</script>
We can combine this sort of library with a web component. Here, Scott imports Reef and constructs the data outside the component so that it’s like the application state:
It’s the virtual DOM in a web component! Another approach that is more reactive in the sense that it watches for changes in attributes and then updates the application state in response which, in turn, updates the greeting.
If the attribute changes, it only changes that instance. The data is registered at the time the component is constructed and we’re only changing string attributes rather than objects with properties.
HTML Web Components
This describes web components that are not empty by default like this:
<my-greeting></my-greeting>
This is a “React” mindset where all the functionality, content, and behavior comes from JavaScript. But Scott reminds us that web components are pretty useful right out of the box without JavaScript. So, “HTML web components” refers to web components that are packed with meaningful content right out of the gate and Scott points to Jeremy Keith’s 2023 article coining the term.
[…] we could call them “HTML web components.” If your custom element is empty, it’s not an HTML web component. But if you’re using a custom element to extend existing markup, that’s an HTML web component.
Jeremy cites something Robin Rendle mused about the distinction:
[…] I’ve started to come around and see Web Components as filling in the blanks of what we can do with hypertext: they’re really just small, reusable chunks of code that extends the language of HTML.
The props look like HTML but they’re not. Instead, the props provide information used to completely swap out the <UserAvatar /> tag with the JavaScript-based markup.
This way, the image is downloaded and ready before JavaScript even loads on the page. Strive for augmentation over replacement!
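For example, a hypothetical <user-avatar> element could ship its image directly in the HTML and use JavaScript only to enhance what’s already there (the element name and enhancements here are illustrative):

```javascript
// The markup ships with meaningful content before JavaScript runs:
// <user-avatar>
//   <img src="/avatars/scott.jpg" alt="Scott" width="64" height="64" />
// </user-avatar>
customElements.define('user-avatar', class extends HTMLElement {
  connectedCallback() {
    const img = this.querySelector('img');
    if (!img) return;
    // Augment the existing image rather than replacing it
    img.loading = 'lazy';
    img.classList.add('avatar--rounded');
  }
});
```

If the script never loads, the user still gets the image.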
resizeasaurus
This helps developers test responsive component layouts, particularly ones that use container queries.
<resize-asaurus>
Drop any HTML in here to test.
</resize-asaurus>
<!-- for example: -->
<resize-asaurus>
<div class="my-responsive-grid">
<div>Cell 1</div> <div>Cell 2</div> <div>Cell 3</div> <!-- ... -->
</div>
</resize-asaurus>
lite-youtube-embed
This is like embedding a YouTube video, but without bringing along all the baggage that YouTube packs into a typical embed snippet.
It starts with a link which is a nice fallback if the video fails to load for whatever reason. When the script runs, the HTML is augmented to include the video <iframe>.
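The basic pattern looks something like this (the video ID is illustrative):

```html
<lite-youtube videoid="ogfYd705cRs">
  <a href="https://www.youtube.com/watch?v=ogfYd705cRs" class="lty-playbtn">
    Watch on YouTube
  </a>
</lite-youtube>
```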
Chapter 7: Web Components Frameworks Tour
Lit
Lit extends the base class and then extends what that class provides, but you’re still working directly on top of web components. There are syntax shortcuts for common patterns and a more structured approach.
WebC
This is part of the 11ty project. It allows you to define custom elements as files, writing everything as a single-file component.
<!-- starting element / index.html -->
<my-element></my-element>
<!-- ../components/my-element.webc -->
<p>This is inside the element</p>
<style>
/* etc. */
</style>
<script>
// etc.
</script>
Pros: Community; SSG progressive enhancement; Single file component syntax; Zach Leatherman!
Cons: Geared toward SSG; Still in early stages
Enhance
This is Scott’s favorite! It renders web components on the server. Web components can render based on application state per request. It’s a way to use custom elements on the server side.
Pros: Ergonomics; Progressive enhancement; Single file component syntax; Full-stack stateful, dynamic SSR components
Cons: Still in early stages
Chapter 8: Web Components Libraries Tour
This is a super short module simply highlighting a few of the more notable libraries for web components that are offered by third parties. Scott is quick to note that all of them are closer in spirit to a React-based approach where custom elements are more like replaced elements with very little meaningful markup to display up-front. That’s not to throw shade at the libraries, but rather to call out that there’s a cost when we require JavaScript to render meaningful content.
Spectrum
<sp-button variant="accent" href="components/button">
Use Spectrum Web Component buttons
</sp-button>
One of the more ambitious projects, as it supports other frameworks like React
Open source
Built on Lit
Most components are not exactly HTML-first. The pattern is closer to replaced elements. There’s plenty of complexity, but that makes sense for a system that drives an application like Photoshop and is meant to drop into any project. But still, there is a cost when it comes to delivering meaningful content to users up-front. An all-or-nothing approach like this might be too stark for a small website project.
FAST
<fast-checkbox>Checkbox</fast-checkbox>
This is Microsoft’s system.
It’s philosophically like Spectrum where there’s very little meaningful HTML up-front.
Fluent is a library that extends the system for UI components.
Microsoft Edge rebuilt the browser’s Chrome using these components.
Shoelace
<sl-button>Click Me</sl-button>
Purely meant for third-party developers to use in their projects
The name is a play on Bootstrap. 🙂
The markup is mostly a custom element with some text in it rather than a pure HTML-first approach.
Shoelace was acquired by Font Awesome, and they are creating Web Awesome Components as a new, subscription-based era of Shoelace.
Chapter 9: What’s Next With Web Components
Scott covers what the future holds for web components as far as he is aware.
Declarative custom elements
Define an element in HTML alone that can be used time and again with a simpler syntax. There’s a GitHub issue that explains the idea, and Zach Leatherman has a great write-up as well.
Container queries
How can we use container queries without needing an extra wrapper around the custom element?
HTML Modules
This was one of the web components’ core features but was removed at some point. They can define HTML in an external place that could be used over and over.
Static site generators have their issues, too. Namely, sites are either purely static, or the frameworks that generate them completely lose out on true static generation as soon as you dip your toes in the direction of server routes.
Astro has been watching the front-end ecosystem and is trying to keep one foot firmly embedded in pure static generation, and the other in a powerful set of server-side functionality.
With Astro Actions, Astro brings a lot of the power of the server to a site that is almost entirely static. A good example of this sort of functionality is dealing with search. If you have a content-based site that can be purely generated, adding search is either going to be something handled entirely on the front end, via a software-as-a-service solution, or, in other frameworks, converting your entire site to a server-side application.
With Astro, we can generate most of our site during our build, but have a small bit of server-side code that can handle our search functionality using something like Fuse.js.
In this demo, we’ll use Fuse to search through a set of personal “bookmarks” that are generated at build time, but return proper results from a server call.
To get started, we’ll just set up a very basic Astro project. In your terminal, run the following command:
npm create astro@latest
Astro’s adorable mascot Houston is going to ask you a few questions in your terminal. Here are the basic responses you’ll need:
Where should we create your new project? Wherever you’d like, but I’ll be calling my directory ./astro-search
How would you like to start your new project? Choose the basic minimalist starter.
Install dependencies? Yes, please!
Initialize a new git repository? I’d recommend it, personally!
This will create a directory in the location specified and install everything you need to start an Astro project. Open the directory in your code editor of choice and run npm run dev in your terminal in the directory.
When you run your project, you’ll see the default Astro project homepage.
We’re ready to get our project rolling!
Basic setup
To get started, let’s remove the default content from the homepage. Open the /src/pages/index.astro file.
This is a fairly barebones homepage, but we want it to be even more basic. Remove the <Welcome /> component, and we’ll have a nice blank page.
For styling, let’s add Tailwind and some very basic markup to the homepage to contain our site.
npx astro add tailwind
The astro add command will install Tailwind and attempt to set up all the boilerplate code for you (handy!). The CLI will ask you if you want it to add the various components, I recommend letting it, but if anything fails, you can copy the code needed from each of the steps in the process. As the last step for getting to work with Tailwind, the CLI will tell you to import the styles into a shared layout. Follow those instructions, and we can get to work.
Let’s add some very basic markup to our new homepage.
---
// ./src/pages/index.astro
import Layout from '../layouts/Layout.astro';
---
<Layout>
<div class="max-w-3xl mx-auto my-10">
<h1 class="text-3xl text-center">My latest bookmarks</h1>
<p class="text-xl text-center mb-5">This is only 10 of A LARGE NUMBER THAT WE'LL CHANGE LATER</p>
</div>
</Layout>
Your site should now look like this.
Not exactly winning any awards yet! That’s alright. Let’s get our bookmarks loaded in.
Adding bookmark data with Astro Content Layer
Since not everyone runs their own application for bookmarking interesting items, you can borrow my data. Here’s a small subset of my bookmarks, or you can go get 110 items from this link on GitHub. Add this data as a file in your project. I like to group data in a data directory, so my file lives in /src/data/bookmarks.json.
[
{
"pageTitle": "Our Favorite Sandwich Bread | King Arthur Baking",
"url": "<https://www.kingarthurbaking.com/recipes/our-favorite-sandwich-bread-recipe>",
"description": "Classic American sandwich loaf, perfect for French toast and sandwiches.",
"id": "007y8pmEOvhwldfT3wx1MW"
},
{
"pageTitle": "Chris Coyier's discussion of Automatic Social Share Images | CSS-Tricks ",
"url": "<https://css-tricks.com/automatic-social-share-images/>",
"description": "It's a pretty low-effort thing to get a big fancy link preview on social media. Toss a handful of specific <meta> tags on a URL and you get a big image-title-description thing ",
"id": "04CXDvGQo19m0oXERL6bhF"
},
{
"pageTitle": "Automatic Social Share Images | ryanfiller.com",
"url": "<https://www.ryanfiller.com/blog/automatic-social-share-images/>",
"description": "Setting up automatic social share images with Puppeteer and Netlify Functions. ",
"id": "04CXDvGQo19m0oXERLoC10"
},
{
"pageTitle": "Emma Wedekind: Foundations of Design Systems / React Boston 2019 - YouTube",
"url": "<https://m.youtube.com/watch?v=pXb2jA43A6k>",
"description": "Emma Wedekind: Foundations of Design Systems / React Boston 2019 Presented by: Emma Wedekind – LogMeIn Design systems are in the world around us, from street...",
"id": "0d56d03e-aba4-4ebd-9db8-644bcc185e33"
},
{
"pageTitle": "Editorial Design Patterns With CSS Grid And Named Columns — Smashing Magazine",
"url": "<https://www.smashingmagazine.com/2019/10/editorial-design-patterns-css-grid-subgrid-naming/>",
"description": "By naming lines when setting up our CSS Grid layouts, we can tap into some interesting and useful features of Grid — features that become even more powerful when we introduce subgrids.",
"id": "13ac1043-1b7d-4a5b-a3d8-b6f5ec34cf1c"
},
{
"pageTitle": "Netlify pro tip: Using Split Testing to power private beta releases - DEV Community 👩💻👨💻",
"url": "<https://dev.to/philhawksworth/netlify-pro-tip-using-split-testing-to-power-private-beta-releases-a7l>",
"description": "Giving users ways to opt in and out of your private betas. Video and tutorial.",
"id": "1fbabbf9-2952-47f2-9005-25af90b0229e"
},
{
"pageTitle": "Netlify Public Folder, Part I: What? Recreating the Dropbox Public Folder With Netlify | Jim Nielsen’s Weblog",
"url": "<https://blog.jim-nielsen.com/2019/netlify-public-folder-part-i-what/>",
"id": "2607e651-7b64-4695-8af9-3b9b88d402d5"
},
{
"pageTitle": "Why Is CSS So Weird? - YouTube",
"url": "<https://m.youtube.com/watch?v=aHUtMbJw8iA&feature=youtu.be>",
"description": "Love it or hate it, CSS is weird! It doesn't work like most programming languages, and it doesn't work like a design tool either. But CSS is also solving a v...",
"id": "2e29aa3b-45b8-4ce4-85b7-fd8bc50daccd"
},
{
"pageTitle": "Internet world despairs as non-profit .org sold for $$$$ to private equity firm, price caps axed • The Register",
"url": "<https://www.theregister.co.uk/2019/11/20/org_registry_sale_shambles/>",
"id": "33406b33-c453-44d3-8b18-2d2ae83ee73f"
},
{
"pageTitle": "Netlify Identity for paid subscriptions - Access Control / Identity - Netlify Community",
"url": "<https://community.netlify.com/t/netlify-identity-for-paid-subscriptions/1947/2>",
"description": "I want to limit certain functionality on my website to paying users. Now I’m using a payment provider (Mollie) similar to Stripe. My idea was to use the webhook fired by this service to call a Netlify function and give…",
"id": "34d6341c-18eb-4744-88e1-cfbf6c1cfa6c"
},
{
"pageTitle": "SmashingConf Freiburg 2019: Videos And Photos — Smashing Magazine",
"url": "<https://www.smashingmagazine.com/2019/10/smashingconf-freiburg-2019/>",
"description": "We had a lovely time at SmashingConf Freiburg. This post wraps up the event and also shares the video of all of the Freiburg presentations.",
"id": "354cbb34-b24a-47f1-8973-8553ed1d809d"
},
{
"pageTitle": "Adding Google Calendar to your JAMStack",
"url": "<https://www.raymondcamden.com/2019/11/18/adding-google-calendar-to-your-jamstack>",
"description": "A look at using Google APIs to add events to your static site.",
"id": "361b20c4-75ce-46b3-b6d9-38139e03f2ca"
},
{
"pageTitle": "How to Contribute to an Open Source Project | CSS-Tricks",
"url": "<https://css-tricks.com/how-to-contribute-to-an-open-source-project/>",
"description": "The following is going to get slightly opinionated and aims to guide someone on their journey into open source. As a prerequisite, you should have basic",
"id": "37300606-af08-4d9a-b5e3-12f64ebbb505"
},
{
"pageTitle": "Functions | Netlify",
"url": "<https://www.netlify.com/docs/functions/>",
"description": "Netlify builds, deploys, and hosts your front end. Learn how to get started, see examples, and view documentation for the modern web platform.",
"id": "3bf9e31b-5288-4b3b-89f2-97034603dbf6"
},
{
"pageTitle": "Serverless Can Help You To Focus - By Simona Cotin",
"url": "<https://hackernoon.com/serverless-can-do-that-7nw32mk>",
"id": "43b1ee63-c2f8-4e14-8700-1e21c2e0a8b1"
},
{
"pageTitle": "Nuxt, Next, Nest?! My Head Hurts. - DEV Community 👩💻👨💻",
"url": "<https://dev.to/laurieontech/nuxt-next-nest-my-head-hurts-5h98>",
"description": "I clearly know what all of these things are. Their names are not at all similar. But let's review, just to make sure we know...",
"id": "456b7d6d-7efa-408a-9eca-0325d996b69c"
},
{
"pageTitle": "Consuming a headless CMS GraphQL API with Eleventy - Webstoemp",
"url": "<https://www.webstoemp.com/blog/headless-cms-graphql-api-eleventy/>",
"description": "With Eleventy, consuming data coming from a GraphQL API to generate static pages is as easy as using Markdown files.",
"id": "4606b168-21a6-49df-8536-a2a00750d659"
}
]
Now that the data is in the project, we need Astro to incorporate it into its build process. To do this, we can use Astro’s new(ish) Content Layer API. The Content Layer API adds a content configuration file to your src directory that allows you to load and collect any number of content pieces from data in your project or external APIs. Create the file /src/content.config.ts (the name of this file matters, as this is what Astro looks for in your project).
In this file, we import a few helpers from Astro. We can use defineCollection to create the collection, z (Zod) to help define our types, and file, a content loader meant to read data files.
The defineCollection method takes an object as its argument with a required loader and optional schema. The schema will help make our content type-safe and make sure our data is always what we expect it to be. In this case, we’ll define the three data properties each of our bookmarks has. It’s important to define all your data in your schema, otherwise it won’t be available to your templates.
We provide the loader property with a content loader. In this case, we’ll use the file loader that Astro provides and give it the path to our JSON.
Finally, we need to export the collections variable as an object containing all the collections that we’ve defined (just bookmarks in our project). You’ll want to restart the local server by re-running npm run dev in your terminal to pick up the new data.
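Putting those steps together, the configuration might look like this (description is marked optional since a few bookmarks in the data omit it):

```javascript
// ./src/content.config.ts
import { defineCollection, z } from 'astro:content';
import { file } from 'astro/loaders';

const bookmarks = defineCollection({
  // The file loader reads our JSON data from disk at build time
  loader: file('./src/data/bookmarks.json'),
  schema: z.object({
    pageTitle: z.string(),
    url: z.string(),
    description: z.string().optional(),
  }),
});

// Export every defined collection — just bookmarks in this project
export const collections = { bookmarks };
```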
Using the new bookmarks content collection
Now that we have data, we can use it in our homepage to show the most recent bookmarks that have been added. To get the data, we need to access the content collection with the getCollection method from astro:content. Add the following code to the frontmatter for ./src/pages/index.astro .
---
import Layout from '../layouts/Layout.astro';
import { getCollection } from 'astro:content';
const bookmarks = await getCollection('bookmarks');
---
This code imports the getCollection method and uses it to create a new variable that contains the data in our bookmarks collection. The bookmarks variable is an array of data, as defined by the collection, which we can loop through in our template.
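One way the template loop might look (how “most recent” is derived depends on your data’s ordering, so the slice here is an assumption):

```astro
---
// ./src/pages/index.astro
import Layout from '../layouts/Layout.astro';
import { getCollection } from 'astro:content';
const bookmarks = await getCollection('bookmarks');
---
<Layout>
  <div class="max-w-3xl mx-auto my-10">
    <h1 class="text-3xl text-center">My latest bookmarks</h1>
    <p class="text-xl text-center mb-5">This is only 10 of {bookmarks.length}</p>
    <ul>
      {bookmarks.slice(0, 10).map(({ data }) => (
        <li class="mb-4">
          <a href={data.url} class="text-xl underline">{data.pageTitle}</a>
          <p class="text-gray-700">{data.description}</p>
        </li>
      ))}
    </ul>
  </div>
</Layout>
```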
This should pull the most recent 10 items from the array and display them on the homepage with some Tailwind styles. The main thing to note here is that the data structure has changed a little. The actual data for each item in our array actually resides in the data property of the item. This allows Astro to put additional data on the object without colliding with any details we provide in our database. Your project should now look something like this.
Now that we have data and display, let’s get to work on our search functionality.
Building search with actions and vanilla JavaScript
To start, we’ll want to scaffold out a new Astro component. In our example, we’re going to use vanilla JavaScript, but if you’re familiar with React or other frameworks that Astro supports, you can opt for client Islands to build out your search. The Astro actions will work the same.
Setting up the component
We need to make a new component to house a bit of JavaScript and the HTML for the search field and results. Create the component in a ./src/components/Search.astro file.
The basic HTML is setting up a search form, input, and results area with IDs that we’ll use in JavaScript. The basic JavaScript finds those elements, and for the form, adds an event listener that fires when the form is submitted. The event listener is where a lot of our magic is going to happen, but for now, a console log will do to make sure everything is set up properly.
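A sketch of that component might look like this (the IDs match what the JavaScript looks up; the styling classes are assumptions):

```astro
<!-- ./src/components/Search.astro -->
<form id="searchForm" class="mb-5">
  <label for="search" class="block mb-2">Search bookmarks</label>
  <input id="search" type="search" class="w-full border rounded p-2" />
  <button type="submit" class="mt-2 px-4 py-2 border rounded">Search</button>
</form>
<div id="results" class="hidden"></div>

<script>
  const form = document.getElementById("searchForm");
  const search = document.getElementById("search");
  const results = document.getElementById("results");

  form?.addEventListener("submit", (e) => {
    e.preventDefault();
    // Placeholder until the Action is wired up
    console.log(search.value);
  });
</script>
```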
Setting up an Astro Action for search
In order for Actions to work, Astro needs to run in server or hybrid mode. These modes allow all or some pages to be rendered in serverless functions instead of being pre-generated as HTML during the build. In this project, we’ll use this for the Action and nothing else, so we’ll opt for hybrid mode.
To be able to run Astro in this way, we need to add a server integration. Astro has integrations for most of the major cloud providers, as well as a basic Node implementation. I typically host on Netlify, so we’ll install their integration. Much like with Tailwind, we’ll use the CLI to add the package and it will build out the boilerplate we need.
npx astro add netlify
Once this is added, Astro is running in Hybrid mode. Most of our site is pre-generated with HTML, but when the Action gets used, it will run as a serverless function.
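The CLI should leave you with a config roughly like this (the exact shape depends on your Astro and adapter versions):

```javascript
// ./astro.config.mjs
import { defineConfig } from 'astro/config';
import netlify from '@astrojs/netlify';

export default defineConfig({
  output: 'hybrid', // pre-render by default, opt into the server per route
  adapter: netlify(),
});
```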
Setting up a very basic search Action
Next, we need an Astro Action to handle our search functionality. To create the action, we need to create a new file at ./src/actions/index.js. All our Actions live in this file. You can write the code for each one in separate files and import them into this file, but in this example, we only have one Action, and that feels like premature optimization.
In this file, we’ll set up our search Action. Much like setting up our content collections, we’ll use a method called defineAction and give it a schema and a handler. The schema validates that the data coming from our JavaScript is typed correctly, and the handler defines what happens when the Action runs.
For our Action, we’ll name it search and expect a schema of an object with a single property named query which is a string. The handler function will get all of our bookmarks from the content collection and use a native JavaScript .filter() method to check if the query is included in any bookmark titles. This basic functionality is ready to test with our front-end.
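Here’s a sketch of that Action (the case-sensitive filter is deliberate for now):

```javascript
// ./src/actions/index.js
import { defineAction } from 'astro:actions';
import { z } from 'astro:schema';
import { getCollection } from 'astro:content';

export const server = {
  search: defineAction({
    schema: z.object({ query: z.string() }),
    handler: async ({ query }) => {
      const bookmarks = await getCollection('bookmarks');
      // Naive, case-sensitive title match
      return bookmarks.filter((bookmark) =>
        bookmark.data.pageTitle.includes(query)
      );
    },
  }),
};
```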
Using the Astro Action in the search form event
When the user submits the form, we need to send the query to our new Action. Instead of figuring out where to send our fetch request, Astro gives us access to all of our server Actions with the actions object in astro:actions. This means that any Action we create is accessible from our client-side JavaScript.
In our Search component, we can now import our Action directly into the JavaScript and then use the search action when the user submits the form.
<script>
import { actions } from "astro:actions";
const form = document.getElementById("searchForm");
const search = document.getElementById("search");
const results = document.getElementById("results");
form?.addEventListener("submit", async (e) => {
e.preventDefault();
results.innerHTML = "";
const query = search.value;
const { data, error } = await actions.search({ query });
if (error) {
results.innerHTML = `<p>${error.message}</p>`;
return;
}
// create a div for each search result
data.forEach(( item ) => {
const div = document.createElement("div");
div.innerHTML = `
<a href="${item.data?.url}" class="block p-6 bg-white border border-gray-200 rounded-lg shadow-sm hover:bg-gray-100 dark:bg-gray-800 dark:border-gray-700 dark:hover:bg-gray-700">
<h3 class="mb-2 text-2xl font-bold tracking-tight text-gray-900 dark:text-white">
${item.data?.pageTitle}
</h3>
<p class="font-normal text-gray-700 dark:text-gray-400">
${item.data?.description}
</p>
</a>`;
// append the div to the results container
results.appendChild(div);
});
// show the results container
results.classList.remove("hidden");
});
</script>
When results are returned, we can now get search results!
Though, they’re highly problematic. This is just a simple JavaScript filter, after all. You can search for “Favorite” and get my favorite bread recipe, but if you search for “favorite” (no caps), you’ll get nothing back… Not ideal.
That’s why we should use a package like Fuse.js.
Adding Fuse.js for fuzzy search
Fuse.js is a JavaScript package that has utilities to make “fuzzy” search much easier for developers. Fuse will accept a string and based on a number of criteria (and a number of sets of data) provide responses that closely match even when the match isn’t perfect. Depending on the settings, Fuse can match “Favorite”, “favorite”, and even misspellings like “favrite” all to the right results.
Is Fuse as powerful as something like Algolia or ElasticSearch? No. Is it free and pretty darned good? Absolutely! To get Fuse moving, we need to install it into our project.
npm install fuse.js
From there, we can use it in our Action by importing it in the file and creating a new instance of Fuse based on our bookmarks collection.
In this case, we create the Fuse instance with a few options. We give it a threshold value between 0 and 1 to decide how “fuzzy” to make the search. Fuzziness is definitely something that depends on use case and the dataset. In our dataset, I’ve found 0.3 to be a great threshold.
The keys array allows you to specify which data should be searched. In this case, I want all the data to be searched, but I want to allow for different weighting for each item. The title should be most important, followed by the description, and the URL should be last. This way, I can search for keywords in all these areas.
Once there’s a new Fuse instance, we run fuse.search(query) to have Fuse check the data, and return an array of results.
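The updated Action might look like this (the threshold and weights are the values discussed above, but tune them for your own data):

```javascript
// ./src/actions/index.js
import { defineAction } from 'astro:actions';
import { z } from 'astro:schema';
import { getCollection } from 'astro:content';
import Fuse from 'fuse.js';

export const server = {
  search: defineAction({
    schema: z.object({ query: z.string() }),
    handler: async ({ query }) => {
      const bookmarks = await getCollection('bookmarks');
      const fuse = new Fuse(bookmarks, {
        threshold: 0.3, // 0 = exact matches only, 1 = match anything
        keys: [
          { name: 'data.pageTitle', weight: 1 },
          { name: 'data.description', weight: 0.7 },
          { name: 'data.url', weight: 0.3 },
        ],
      });
      // Returns [{ item, refIndex }] — note the wrapping
      return fuse.search(query);
    },
  }),
};
```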
When we run this with our front-end, we find we have one more issue to tackle.
The structure of the data returned is not quite what it was with our simple JavaScript. Each result now has a refIndex and an item. All our data lives on the item, so we need to destructure the item off of each returned result.
To do that, adjust the front-end forEach.
// create a div for each search result
data.forEach(({ item }) => {
const div = document.createElement("div");
div.innerHTML = `
<a href="${item.data?.url}" class="block p-6 bg-white border border-gray-200 rounded-lg shadow-sm hover:bg-gray-100 dark:bg-gray-800 dark:border-gray-700 dark:hover:bg-gray-700">
<h3 class="mb-2 text-2xl font-bold tracking-tight text-gray-900 dark:text-white">
${item.data?.pageTitle}
</h3>
<p class="font-normal text-gray-700 dark:text-gray-400">
${item.data?.description}
</p>
</a>`;
// append the div to the results container
results.appendChild(div);
});
Now, we have a fully working search for our bookmarks.
Next steps
This just scratches the surface of what you can do with Astro Actions. For instance, we should probably add additional error handling based on the error we get back. You can also experiment with handling this at the page-level and letting there be a Search page where the Action is used as a form action and handles it all as a server request instead of with front-end JavaScript code. You could also refactor the JavaScript from the admittedly low-tech vanilla JS to something a bit more robust with React, Svelte, or Vue.
One thing is for sure, Astro keeps looking at the front-end landscape and learning from the mistakes and best practices of all the other frameworks. Actions, Content Layer, and more are just the beginning for a truly compelling front-end framework.
The videos from Smashing Magazine’s recent event on accessibility were just posted the other day. I was invited to host the panel discussion with the speakers, including a couple of personal heroes of mine, Stéphanie Walter and Sarah Fossheim. But I was just as stoked to meet Kardo Ayoub who shared his deeply personal story as a designer with a major visual impairment.
I’ll drop the video here:
I’ll be the first to admit that I had to hold back my emotions as Kardo detailed what led to his impairment, the shock that came of it, and how he has beaten the odds to not only be an effective designer, but a real leader in the industry. It’s well worth watching his full presentation, which is also available on YouTube alongside the full presentations from Stéphanie and Sarah.
Email Spam Checker is an intuitive online tool that helps users determine whether their emails might trigger spam filters. By analyzing various elements of an email, including content, formatting, and header information, this tool provides a comprehensive assessment of an email’s likelihood of being marked as spam. Whether you’re a business owner sending marketing emails or an individual concerned about the deliverability of your messages, this tool offers valuable insights to optimize your email communication.
Features of Email Spam Checker AI
Real-time Email Analysis – The tool scans your email content instantly, providing immediate feedback on potential spam triggers.
Comprehensive Spam Score – Receive a detailed spam score that indicates how likely your email is to be flagged by common spam filters.
Content Evaluation – The checker analyzes the text content of your email for spam-triggering words, phrases, and patterns.
Header Analysis – The tool examines email headers for potential issues that might affect deliverability.
HTML Structure Check – For HTML emails, the tool checks the code structure for potential red flags.
Improvement Suggestions – Receive actionable recommendations to improve your email’s deliverability.
User-friendly Interface – The intuitive design makes it easy for users of all technical levels to check their emails.
Privacy-focused – Your email content is analyzed securely without storing sensitive information.
How It Works
Copy and Paste Your Email Content – Simply copy the content of your email, including the subject line and body, and paste it into the provided text field.
Include Headers (Optional) – For a more thorough analysis, you can also include the email headers.
Click ‘Check for Spam’ – Once you’ve entered your email content, click the button to initiate the analysis.
Review the Analysis Results – The tool will process your email and provide a detailed report on potential spam triggers.
Implement Suggested Changes – Use the recommendations provided to modify your email content and improve its deliverability.
Re-check If Necessary – After making changes, you can run the check again to see if your spam score has improved.
Benefits of Email Spam Checker AI
Improved Email Deliverability – By identifying and addressing potential spam triggers, you can increase the chances of your emails reaching the intended recipients.
Time and Resource Savings – Avoid the frustration and wasted resources associated with emails being filtered out before reaching recipients.
Enhanced Sender Reputation – Consistently sending non-spammy emails helps maintain a positive sender reputation with email service providers.
Marketing Campaign Optimization – For businesses, the tool helps optimize marketing emails to ensure they reach customers’ inboxes.
Real-time Feedback – Get immediate insights into potential issues with your email content before sending.
Educational Value – Learn about common spam triggers and best practices for email composition.
Reduced Risk of Blacklisting – By avoiding spam-like behavior, reduce the risk of your domain being blacklisted by email providers.
Professional Communication – Ensure your professional communications maintain a high standard of deliverability.
Pricing
Free Basic Check – A limited version allowing users to check a small number of emails per day.
Premium Plan – $9.99/month for unlimited email checks and additional analysis features.
Business Plan – $24.99/month including API access and bulk email checking capabilities.
Enterprise Solutions – Custom pricing for organizations with specific needs and high-volume requirements.
Annual Discount – Save 20% when subscribing to annual plans instead of monthly billing.
14-Day Free Trial – Available for Premium and Business plans to test all features before committing.
Review
After thorough testing, AI Para Bellum’s Email Spam Checker proves to be a reliable and efficient tool for anyone concerned about email deliverability. The interface is clean and straightforward, making it accessible even for users with limited technical knowledge. The analysis is comprehensive, covering various aspects that might trigger spam filters, from specific keywords to HTML structure.
The detailed reports provide clear insights into potential issues, and the suggested improvements are practical and easy to implement. Business users will particularly appreciate the bulk checking capabilities available in higher-tier plans, allowing for the analysis of entire email campaigns efficiently.
One notable strength is the tool’s ability to keep up with evolving spam detection algorithms used by major email providers. This ensures that the recommendations remain relevant in the constantly changing landscape of email filtering.
While the free version offers limited functionality, the paid plans provide excellent value for businesses and individuals who rely heavily on email communication. The pricing structure is reasonable considering the potential cost savings from improved email deliverability.
Conclusion
In an era where effective email communication is crucial, AI Para Bellum’s Email Spam Checker stands out as an essential tool for ensuring your messages reach their intended recipients. By providing detailed analysis and actionable recommendations, this tool helps users optimize their emails and avoid common spam triggers. Whether you’re a marketing professional managing email campaigns or an individual concerned about important messages being filtered out, this spam checker offers valuable insights to improve deliverability. With its user-friendly interface, comprehensive analysis, and reasonable pricing, AI Para Bellum’s Email Spam Checker is a worthwhile investment for anyone serious about effective email communication.
Enhanced Sender Reputation – Consistently sending non-spammy emails helps maintain a positive sender reputation with email service providers.
Marketing Campaign Optimization – For businesses, the tool helps optimize marketing emails to ensure they reach customers’ inboxes.
Real-time Feedback – Get immediate insights into potential issues with your email content before sending.
Educational Value – Learn about common spam triggers and best practices for email composition.
Reduced Risk of Blacklisting – By avoiding spam-like behavior, reduce the risk of your domain being blacklisted by email providers.
Professional Communication – Ensure your professional communications maintain a high standard of deliverability.
Pricing
Free Basic Check – A limited version allowing users to check a small number of emails per day.
Premium Plan – $9.99/month for unlimited email checks and additional analysis features.
Business Plan – $24.99/month including API access and bulk email checking capabilities.
Enterprise Solutions – Custom pricing for organizations with specific needs and high-volume requirements.
Annual Discount – Save 20% when subscribing to annual plans instead of monthly billing.
14-Day Free Trial – Available for Premium and Business plans to test all features before committing.
Review
After thorough testing, AI Para Bellum’s Email Spam Checker proves to be a reliable and efficient tool for anyone concerned about email deliverability. The interface is clean and straightforward, making it accessible even for users with limited technical knowledge. The analysis is comprehensive, covering various aspects that might trigger spam filters, from specific keywords to HTML structure.
The detailed reports provide clear insights into potential issues, and the suggested improvements are practical and easy to implement. Business users will particularly appreciate the bulk checking capabilities available in higher-tier plans, allowing for the analysis of entire email campaigns efficiently.
One notable strength is the tool’s ability to keep up with evolving spam detection algorithms used by major email providers. This ensures that the recommendations remain relevant in the constantly changing landscape of email filtering.
While the free version offers limited functionality, the paid plans provide excellent value for businesses and individuals who rely heavily on email communication. The pricing structure is reasonable considering the potential cost savings from improved email deliverability.
Conclusion
In an era where effective email communication is crucial, AI Para Bellum’s Email Spam Checker stands out as an essential tool for ensuring your messages reach their intended recipients. By providing detailed analysis and actionable recommendations, this tool helps users optimize their emails and avoid common spam triggers. Whether you’re a marketing professional managing email campaigns or an individual concerned about important messages being filtered out, this spam checker offers valuable insights to improve deliverability. With its user-friendly interface, comprehensive analysis, and reasonable pricing, AI Para Bellum’s Email Spam Checker is a worthwhile investment for anyone serious about effective email communication.
In the last article, we created a CSS-only star rating component using the CSS mask and border-image properties, as well as the newly enhanced attr() function. We ended with CSS code that we can easily adjust to create component variations, including a heart rating and volume control.
This second article will study a different approach that gives us more flexibility. Instead of the border-image trick we used in the first article, we will rely on scroll-driven animations!
Here is the same star rating component with the new implementation. And since we’re treading in experimental territory, you’ll want to view this in Chrome 115+ while we wait for Safari and Firefox support:
Do you spot the difference between this and the final demo in the first article? This time, I am updating the color of the stars based on how many of them are selected — something we cannot do using the border-image trick!
I highly recommend you read the first article before jumping into this second part if you missed it, as I will be referring to concepts and techniques that we explored over there.
One more time: At the time of writing, only Chrome 115+ and Edge 115+ fully support the features we will be using in this article, so please use either one of those as you follow along.
Why scroll-driven animations?
You might be wondering why we’re talking about scroll-driven animations at all when there’s nothing to scroll or animate in a star rating component. It’s even more confusing when you read the MDN explainer for scroll-driven animations:
It allows you to animate property values based on a progression along a scroll-based timeline instead of the default time-based document timeline. This means that you can animate an element by scrolling a scrollable element, rather than just by the passing of time.
But if you keep reading, you will see that we have two types of scroll-based timelines: scroll progress timelines and view progress timelines. In our case, we are going to use the second one, a view progress timeline. Here is how MDN describes it:
You progress this timeline based on the change in visibility of an element (known as the subject) inside a scroller. The visibility of the subject inside the scroller is tracked as a percentage of progress — by default, the timeline is at 0% when the subject is first visible at one edge of the scroller, and 100% when it reaches the opposite edge.
Things start to make more sense if we consider the thumb element as the subject and the input element as the scroller. After all, the thumb moves within the input area, so its visibility changes. We can track that movement as a percentage of progress and convert it to a value we can use to style the input element. We are essentially going to implement the equivalent of document.querySelector("input").value in JavaScript but with vanilla CSS!
The implementation
Now that we have an idea of how this works, let’s see how everything translates into code.
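The demo’s full CSS isn’t reproduced in this excerpt, so here is a sketch of the whole approach (the thumb selector and the attr() syntax are assumptions based on Chrome’s experimental support):

```css
/* Sketch: the input is the scroller, its thumb is the subject */
input {
  overflow: hidden;                    /* make the input the scroller */
  timeline-scope: --val;               /* lift the thumb's timeline up here */
  animation: --val linear both;        /* run the --val keyframes... */
  animation-timeline: --val;           /* ...driven by the view timeline */
  animation-range: entry 100% exit 0%; /* match the thumb's actual travel */
}
input::-webkit-slider-thumb {
  view-timeline: --val inline;         /* the thumb is the subject */
}
@property --val {
  syntax: "<number>";
  inherits: true;
  initial-value: 0;
}
@keyframes --val {
  0%   { --val: attr(max type(<number>), 5); } /* thumb at the right */
  100% { --val: attr(min type(<number>), 1); } /* thumb at the left */
}
```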
I know, this is a lot of strange syntax! But we will dissect each line and you will see that it’s not all that complex at the end of the day.
The subject and the scroller
We start by defining the subject, i.e. the thumb element, and for this we use the view-timeline shorthand property. From the MDN page, we can read:
The view-timeline CSS shorthand property is used to define a named view progress timeline, which is progressed through based on the change in visibility of an element (known as the subject) inside a scrollable element (scroller). view-timeline is set on the subject.
I think it’s self-explanatory. The view timeline name is --val and the axis is inline since we’re working along the horizontal x-axis.
Next, we define the scroller, i.e. the input element, and for this, we use overflow: hidden (or overflow: auto). This part is the easiest, but also the one you will forget the most, so let me insist on this: don’t forget to define overflow on the scroller!
I insist on this because your code will still run without overflow defined, but the values won’t be correct. The scroller will still exist, but the browser will choose it for you (depending on your page structure and your CSS), and most of the time it’s not the one you want. So let me repeat it one more time: remember the overflow property!
The animation
Next up, we create an animation that animates the --val variable between the input’s min and max values. As we did in the first article, we use the newly-enhanced attr() function to get those values. This is the “animation” part of the scroll-driven animation, and we link it to the view timeline we defined on the subject using animation-timeline. And to be able to animate a variable, we register it using @property.
Note the use of timeline-scope which is another tricky feature that’s easy to overlook. By default, named view timelines are scoped to the element where they are defined and its descendant. In our case, the input is a parent element of the thumb so it cannot access the named view timeline. To overcome this, we increase the scope using timeline-scope. Again, from MDN:
timeline-scope is given the name of a timeline defined on a descendant element; this causes the scope of the timeline to be increased to the element that timeline-scope is set on and any of its descendants. In other words, that element and any of its descendant elements can now be controlled using that timeline.
Never forget about this! Sometimes everything is correctly defined but nothing is working because you forget about the scope.
There’s something else you might be wondering:
Why are the keyframe values inverted? Why is the min set to 100% and the max set to 0%?
To understand this, let’s first take the following example where you can scroll the container horizontally to reveal a red circle inside of it.
Initially, the red circle is hidden on the right side. Once we start scrolling, it appears from the right side, then disappears to the left as you continue scrolling towards the right. We scroll from left to right but our actual movement is from right to left.
In our case, we don’t have any scrolling since our subject (the thumb) will not overflow the scroller (the input) but the main logic is the same. The starting point is the right side and the ending point is the left side. In other words, the animation starts when the thumb is on the right side (the input’s max value) and will end when it’s on the left side (the input’s min value).
The animation range
The last piece of the puzzle is the following important line of code:
animation-range: entry 100% exit 0%;
By default, the animation starts when the subject starts to enter the scroller from the right and ends when the subject has completely exited the scroller from the left. This is not good because, as we said, the thumb will not overflow the scroller, so it will never reach the start and the end of the animation.
To rectify this we use the animation-range property to make the start of the animation when the subject has completely entered the scroller from the right (entry 100%) and the end of the animation when the subject starts to exit the scroller from the left (exit 0%).
To summarize, the thumb element moves within the input’s area, and that movement controls the progress of an animation that animates a variable between the input’s min and max attribute values. We have our replacement for document.querySelector("input").value in JavaScript!
What’s going on with all the --val instances everywhere? Is it the same thing each time?
I am deliberately using the same --val everywhere to confuse you a little and push you to try to understand what is going on. We usually use the dashed ident (--) notation to define custom properties (also called CSS variables) that we later call with var(). This is still true but that same notation can be used to name other things as well.
In our examples we have three different things named --val:
The variable that is animated and registered using @property. It contains the selected value and is used to style the input.
The named view timeline defined by view-timeline and used by animation-timeline.
The keyframes named --val and called by animation.
Here is the same code written with different names for more clarity:
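A sketch with distinct names so each role is visible (the thumb selector and attr() syntax are assumptions based on Chrome’s experimental support):

```css
@property --val {                      /* 1. the animated variable */
  syntax: "<number>";
  inherits: true;
  initial-value: 0;
}
input::-webkit-slider-thumb {
  view-timeline: --timeline inline;    /* 2. the named view timeline */
}
input {
  overflow: hidden;
  timeline-scope: --timeline;
  animation: progress linear both;     /* 3. the keyframes, now "progress" */
  animation-timeline: --timeline;
  animation-range: entry 100% exit 0%;
}
@keyframes progress {
  0%   { --val: attr(max type(<number>), 5); }
  100% { --val: attr(min type(<number>), 1); }
}
```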
All that we have done up to now is get the selected value of the input range — which is honestly about 90% of the work we need to do. What remains is some basic styles and code taken from what we made in the first article.
If we omit the code from the previous section and the code from the previous article, here is what we are left with:
We make the thumb invisible and we define a gradient on the main element to color in the stars. No surprise here, but the gradient uses the same --val variable that contains the selected value to inform how much is colored in.
When, for example, you select three stars, the --val variable will equal 3 and the color stop of the first color will equal 3*100%/5, or 60%, meaning three stars are colored in. That same color is also dynamic, as I am using the hsl() function where the first argument (the hue) is a function of --val as well.
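A sketch of those styles (the hue formula and colors are illustrative, not the demo’s exact values):

```css
input::-webkit-slider-thumb {
  opacity: 0; /* hide the thumb; its movement still drives the timeline */
}
main {
  /* the hue shifts with the selected value; the numbers are illustrative */
  --c: hsl(calc(30 + 10 * var(--val)) 90% 50%);
  background: linear-gradient(
    90deg,
    var(--c) calc(var(--val) * 100% / 5), /* e.g. --val: 3 gives a 60% stop */
    lightgray 0
  );
}
```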
Here is the full demo, which you will want to open in Chrome 115+ at the time I’m writing this:
And guess what? This implementation works with half stars as well without the need to change the CSS. All you have to do is update the input’s attributes to work in half increments. Remember, we’re yanking these values out of HTML into CSS using attr(), which reads the attributes and returns them to us.
<input type="range" min=".5" step=".5" max="5">
That’s it! We have our rating star component that you can easily control by adjusting the attributes.
So, should I use border-image or a scroll-driven animation?
If we look past the browser support factor, I consider this version better than the border-image approach we used in the first article. The border-image version is simpler and does the job pretty well, but it’s limited in what it can do. While our goal is to create a star rating component, it’s good to be able to do more and be able to style an input range as you want.
With scroll-driven animations, we have more flexibility since the idea is to first get the value of the input and then use it to style the element. I know it’s not easy to grasp but don’t worry about that. You will face scroll-driven animations more often in the future and it will become more familiar with time. This example will look easy to you in good time.
It’s worth noting that the code used to get the value is generic code you can easily reuse even if you are not going to style the input itself. Getting the value of the input is independent of styling it.
Many techniques are involved in creating that demo, and one of them uses scroll-driven animations to get the input value and show it inside the tooltip!
This one is a bit crazy, but it illustrates how far we can go with styling an input range! So, even if your goal is not to create a star rating component, there are a lot of use cases where such a technique can be really useful.
Conclusion
I hope you enjoyed this brief two-part series. In addition to a star rating component made with minimal code, we have explored a lot of cool and modern features, including the attr() function, CSS mask, and scroll-driven animations. It’s still early to adopt all of these features in production because of browser support, but it’s a good time to explore them and see what can be done soon using only CSS.
That an invalid custom property invalidates the entire declaration isn’t surprising, but I didn’t consider it until I saw one of my declarations break. I guess it’s just good to know that, especially if you’re working a lot with custom properties.
This easily qualifies as a “gotcha” in CSS and is a good reminder that the cascade doesn’t know everything all at the same time. If a custom property is invalid, the cascade won’t ignore it, and it gets evaluated, which invalidates the declaration. And if we set an invalid custom property on a shorthand property that combines several constituent properties — like how background and animation are both shorthand for a bunch of other properties — then the entire declaration becomes invalid, including all of the implied constituents. No bueno indeed.
What to do, then?
So, maybe don’t use custom properties in shorthand properties or use custom properties but don’t use shorthand properties.
Grouping selected items is a design choice often employed to help users quickly grasp which items are selected and unselected. For instance, checked-off items move up the list in to-do lists, allowing users to focus on the remaining tasks when they revisit the list.
We’ll design a UI that follows a similar grouping pattern. Instead of simply rearranging the list of selected items, we’ll also lay them out horizontally using CSS Grid. This further distinguishes between the selected and unselected items.
We’ll explore two approaches for this. One involves using auto-fill, which is suitable when the selected items don’t exceed the grid container’s boundaries, ensuring a stable layout. In contrast, CSS Grid’s span keyword provides another approach that offers greater flexibility.
The markup consists of an unordered list (<ul>). However, we don’t necessarily have to use <ul> and <li> elements since the layout of the items will be determined by the CSS grid properties. Note that I am using an implicit <label> around the <input> elements mostly as a way to avoid needing an extra wrapper around things, though explicit labels are generally better supported by assistive technologies.
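The markup might look something like this (the item content and count are illustrative):

```html
<ul>
  <li>
    <label>
      <input type="checkbox">
      <!-- the item's icon or text -->
    </label>
  </li>
  <!-- ...five more <li> items like the one above -->
</ul>
```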
Method 1: Using auto-fill
ul {
  width: 250px;
  display: grid;
  gap: 14px 10px;
  grid-template-columns: repeat(auto-fill, 40px);
  justify-content: center;
  /* etc. */
}
The <ul> element, which contains the items, has a display: grid style rule, turning it into a grid container. It also has gaps of 14px between grid rows and 10px between grid columns, and the grid content is justified (aligned along the inline axis) to the center.
The grid-template-columns property specifies how column tracks will be sized in the grid. Initially, all items will be in a single column. However, when items are selected, they will be moved to the first row, and each selected item will be in its own column. The key part of this declaration is the auto-fill value.
The auto-fill value goes where the repeat count belongs in the repeat() function. It repeats columns of the given track size (40px in our example), creating as many as will fit inside the grid container’s boundaries.
For now, let’s make sure that the list items are positioned in a single column:
li {
  width: inherit;
  grid-column: 1;
  /* Equivalent to:
     grid-column-start: 1;
     grid-column-end: auto; */
  /* etc. */
}
When an item is checked, that is, when an <li> element :has() a :checked checkbox, we select it and give the <li> a grid-area that puts it in the first row. Its column is then auto-placed within that row according to the grid container’s (<ul>) grid-template-columns value. This causes the selected items to group at the top of the list, arranged horizontally:
li {
  width: inherit;
  grid-column: 1;
  /* etc. */

  &:has(:checked) {
    grid-area: 1;
    /* Equivalent to:
       grid-row-start: 1;
       grid-column-start: auto;
       grid-row-end: auto;
       grid-column-end: auto; */
    width: 40px;
    /* etc. */
  }
  /* etc. */
}
And that gives us our final result! Let’s compare that with the second method I want to show you.
Method 2: Using the span keyword
We won’t be needing the grid-template-columns property now. Here’s the new <ul> style ruleset:
ul {
  width: 250px;
  display: grid;
  gap: 14px 10px;
  justify-content: center;
  justify-items: center;
  /* etc. */
}
The inclusion of justify-items will help with the alignment of grid items as we’ll see in a moment. Here are the updated styles for the <li> element:
li {
  width: inherit;
  grid-column: 1 / span 6;
  /* Equivalent to:
     grid-column-start: 1;
     grid-column-end: span 6; */
  /* etc. */
}
As before, each item is placed in the first column, but now it also spans six column tracks (since there are six items). This ensures that when selected items create multiple columns in the grid, the unselected items that follow still sit in a single stack beneath them, because each unselected item spans all of the column tracks. The justify-items: center declaration keeps the items aligned to the center.
li {
  width: inherit;
  grid-column: 1 / span 6;
  /* etc. */

  &:has(:checked) {
    grid-area: 1;
    width: 120px;
    /* etc. */
  }
  /* etc. */
}
The width of the selected items has been increased from the previous example so we can see how the selection UI behaves when the selected items overflow the container.
Selection order
The order of selected and unselected items will remain the same as the source order. If the on-screen order needs to match the user’s selection, dynamically assign an incremental order value to the items as they are selected.
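One way to sketch that (the helper name and wiring are illustrative, not part of the original demo): keep a counter and hand each newly selected item an increasing CSS order value.

```javascript
// Returns a function mapping a checkbox state to a CSS `order` value:
// newly checked items get 1, 2, 3, ...; unchecked items reset to 0.
function makeOrderAssigner() {
  let counter = 0;
  return (isChecked) => (isChecked ? ++counter : 0);
}

// Browser wiring (skipped outside the DOM): update the parent <li>'s
// `order` whenever its checkbox toggles.
if (typeof document !== "undefined") {
  const nextOrder = makeOrderAssigner();
  document.querySelectorAll('li input[type="checkbox"]').forEach((box) => {
    box.addEventListener("change", () => {
      box.closest("li").style.order = nextOrder(box.checked);
    });
  });
}
```

Because the items are grid items, changing order rearranges their auto-placement without touching the DOM source order.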
CSS Grid helps make both approaches very flexible without a ton of configuration. By using auto-fill to place items on either axis (rows or columns), the selected items can be easily grouped within the grid container without disturbing the layout of the unselected items in the same container, as long as the selected items don’t overflow the container.
If they do overflow the container, using the span approach helps maintain the layout irrespective of how long the group of selected items gets in a given axis. Some design alternatives for the UI are grouping the selected items at the end of the list, or swapping the horizontal and vertical structure.
QuestionAI represents the next evolution in question-answering technology, bridging the gap between vast information repositories and human curiosity. Unlike traditional search engines that return lists of potentially relevant links, QuestionAI directly delivers concise, accurate answers drawn from analyzing multiple sources. The platform understands natural language queries, recognizes context, and processes information much like a human researcher would—but at machine speed. Designed for professionals, researchers, students, and curious minds alike, QuestionAI serves as an intelligent assistant that not only answers questions but also helps users explore topics more deeply through related insights and explanations.
Features
Natural Language Processing: Ask questions in everyday language just as you would to a human expert, without needing to format queries in any special way.
Contextual Understanding: The AI maintains conversation context, allowing for follow-up questions without repeating background information.
Multi-Source Analysis: Draws information from diverse and verified sources to provide comprehensive answers that consider multiple perspectives.
Citation Tracking: Clearly identifies the sources of information, allowing users to verify facts and explore topics more deeply.
Customizable Knowledge Base: Ability to connect to specific document sets or databases for specialized questioning in professional environments.
Visual Response Options: Receive responses not just as text but also as charts, graphs, and other visualizations when appropriate.
Multilingual Support: Ask questions and receive answers in multiple languages, breaking down language barriers to information.
Conversation History: Maintains a searchable log of previous questions and answers for easy reference and continued learning.
Integration Capabilities: Connects with popular productivity tools, research platforms, and learning management systems.
Privacy Controls: Robust security features ensure sensitive questions and information remain confidential.
How It Works
Question Input: Users enter their question in natural language through the clean, intuitive interface.
AI Processing: The system analyzes the question to understand intent, context, and required information.
Knowledge Base Search: QuestionAI scans its vast knowledge repositories, identifying relevant information from verified sources.
Answer Synthesis: Rather than simply retrieving information, the AI synthesizes a coherent, comprehensive answer tailored to the specific question.
Source Attribution: The system attributes information to its original sources, allowing users to verify claims and explore further.
Refinement Loop: Users can ask follow-up questions or request clarification, with the system maintaining context from the previous exchanges.
Continuous Learning: The platform improves over time, learning from interactions to provide increasingly relevant and accurate responses.
Benefits
Time Efficiency: Receive immediate answers to complex questions without sifting through multiple search results or documents.
Information Accuracy: Benefit from answers drawn from verified sources with clear citations, reducing the risk of misinformation.
Knowledge Depth: Explore topics more thoroughly with an AI that can handle increasingly specific and technical questions.
Research Enhancement: Accelerate research processes by quickly establishing foundational knowledge before diving deeper.
Learning Support: Supplement educational experiences with explanations that adapt to different knowledge levels.
Decision Support: Make more informed decisions by accessing relevant information quickly and in context.
Productivity Boost: Spend less time searching for information and more time applying insights to tasks and projects.
Linguistic Accessibility: Access information regardless of language barriers through multilingual support.
Reduced Cognitive Load: Offload the mental effort of searching and synthesizing information to an AI assistant.
Continuous Knowledge Access: Stay informed with up-to-date information through a system that regularly updates its knowledge base.
Pricing
Free Tier: Limited daily questions with basic functionality, perfect for casual users exploring the platform.
Personal Plan ($19/month): Unlimited questions, full feature access, and priority processing for individual users.
Professional Plan ($49/month): Enhanced capabilities including document uploads, team sharing features, and advanced analytics.
Enterprise Solution (Custom Pricing): Tailored implementation with private knowledge bases, enhanced security, dedicated support, and custom integrations.
Educational Discount: Special pricing for students and educational institutions with additional learning-focused features.
Annual Subscription: Save 20% when purchasing an annual subscription for any paid tier.
Review
QuestionAI delivers on its promise to revolutionize information retrieval through artificial intelligence. During our extensive testing, the platform consistently provided relevant, accurate answers to questions ranging from straightforward facts to complex conceptual inquiries.
The natural language processing capabilities are particularly impressive, accurately interpreting questions even when phrased conversationally or with ambiguous terms. The system rarely misunderstands intent and, when it does, the clarification process is smooth and intuitive.
Conclusion
QuestionAI stands at the forefront of a new paradigm in information access, where artificial intelligence serves not just as a tool for finding information but as an active partner in knowledge exploration and synthesis. By combining advanced natural language processing, comprehensive knowledge repositories, and intuitive design, it delivers on the promise of AI as an extension of human cognitive capabilities.
In today’s data-driven business landscape, making sense of complex information quickly can be the difference between success and stagnation. Enter MyReport by Alaba AI, a cutting-edge tool designed to transform how professionals interact with and extract value from their data. This innovative platform leverages the power of artificial intelligence to streamline report creation, making business intelligence accessible to everyone, regardless of their technical expertise.
Introduction
MyReport stands at the intersection of artificial intelligence and business analytics, offering a seamless solution for professionals seeking to make data-driven decisions without the typical technical hurdles. The platform enables users to generate comprehensive, visually appealing reports from raw data in minutes rather than hours. By eliminating the tedious aspects of report creation, MyReport frees up valuable time for analysis and strategic planning, allowing teams to focus on what truly matters: extracting actionable insights from their data.
Features of MyReport by Alaba AI
Natural Language Processing: Simply tell the AI what kind of report you need in plain English, and watch as MyReport transforms your request into a structured document.
Multi-Format Data Handling: Upload data in various formats including CSV, Excel, PDF, and more without worrying about compatibility issues.
Automated Data Visualization: The platform intelligently selects the most appropriate charts, graphs, and visual representations for your data.
Customizable Templates: Choose from professionally designed templates or create your own to maintain brand consistency across all reports.
Collaborative Workspace: Multiple team members can work on reports simultaneously, leaving comments and suggestions in real-time.
AI-Powered Insights: Beyond visualization, MyReport highlights key trends, anomalies, and potential opportunities hidden in your data.
One-Click Sharing Options: Distribute finished reports across various channels with ease, including direct links, email, and integration with common workplace tools.
Revision History: Track changes and revert to previous versions if needed, ensuring nothing gets lost in the iteration process.
How It Works
Data Upload: Begin by uploading your raw data files to MyReport through a simple drag-and-drop interface.
AI Analysis: The system automatically analyzes your data, identifying relationships, patterns, and potential points of interest.
Report Generation: Using natural language prompts, tell the AI what aspects of the data you want to highlight in your report.
Customization: Fine-tune the generated report by adjusting visualizations, adding commentary, or rearranging sections to better tell your data story.
Collaboration: Invite team members to review and contribute to the report before finalization.
Export and Share: Download your completed report in multiple formats or share it directly with stakeholders through various channels.
Benefits of MyReport by Alaba AI
Time Efficiency: Reduce report creation time by up to 80% with MyReport, allowing your team to focus on analysis rather than formatting.
Accessibility: Democratize data analysis by eliminating the need for specialized technical skills to create professional reports.
Consistency: Ensure all company reports follow the same high-quality standards regardless of who creates them.
Enhanced Decision Making: Surface insights that might be missed in manual analysis, leading to more informed business decisions.
Cost Reduction: Lower the overhead costs associated with dedicated data analysis teams or specialized software training.
Scalability: Whether you’re creating one report a month or dozens a day, MyReport scales to meet your needs without performance degradation.
Error Reduction: Minimize human errors in data handling and calculation through automated processing.
Professional Presentation: Impress clients and stakeholders with polished, visually appealing reports that effectively communicate complex information.
Pricing
Free Tier: Access basic MyReport features with limited monthly reports and standard templates. Perfect for individuals or small teams just getting started.
Professional Plan ($29/month): Unlock advanced visualization options, custom branding, and increased report volume. Ideal for growing businesses with regular reporting needs.
Business Plan ($79/month): Gain access to all premium features, priority processing, and team collaboration tools. Designed for established companies with multiple departments.
Enterprise Solution (Custom Pricing): Tailored packages for large organizations with specific requirements, including dedicated support, custom integrations, and unlimited reporting capacity.
Annual Discount: Save 20% on any paid plan when billed annually rather than monthly.
Review
After extensive testing, MyReport by Alaba AI proves itself to be a formidable ally in the quest for data-driven decision making. The platform’s intuitive interface belies its powerful capabilities, making it accessible to novices while still satisfying the demands of experienced data professionals.
The natural language processing functionality stands out as particularly impressive, accurately interpreting even complex requests with minimal guidance. During our testing, we found that MyReport could handle industry-specific terminology with surprising fluency, suggesting a robust training model behind the scenes.
Conclusion
MyReport by Alaba AI emerges as a powerful solution in the evolving landscape of business intelligence tools. By combining sophisticated AI capabilities with user-friendly design, it effectively bridges the gap between raw data and actionable insights. While not without its learning curve, the platform’s ability to dramatically reduce report creation time while improving output quality makes it a compelling choice for organizations of all sizes.
I’ve been a bit sucked into the game Balatro lately. Seriously. Tell me your strategies. Lately I enjoy playing it just as much as unwinding by watching streamers play it on YouTube. Balatro has a handful of accessibility features — stuff like slowing down or turning off animations and the like. I’m particularly interested in one of the checkboxes below though:
“High Contrast Cards” is one of the options. It’s a nice option to have, but I find it particularly notable because of its popularity. You know those streamers I mentioned? They all seem to have this option turned on. Interesting how an “accessibility feature” actually seems to make the game better for everybody. As in, maybe the default should be reversed, or the option shouldn’t be there at all, with the high contrast version being just how it is.
It reminds me of how half of Americans, particularly the younger generations, prefer having closed captioning on TV some or all of the time. An accessibility feature that they just prefer.
Interestingly, the high contrast mode in Balatro mostly focuses on changing colors.
If you don’t suffer from any sort of colorblindness (like me? I think?) you’ll notice the clubs above are blue, which differentiates them from the spades, which remain black. The hearts and diamonds are slightly differentiated, with the diamonds being a bit more orange than red.
Is that enough? It’s enough that many players prefer it, and it likely prevents accidentally playing a flush with the wrong suits, for example. But I can’t vouch for whether it works for people with actual low vision or a type of color blindness, which I’d assume is the main point of the feature. Andy Baio wrote a memorable post about colorblindness a few years ago called Chasing rainbows. There are some great examples in that post that highlight the particular type of colorblindness Andy has. Sometimes super different colors look a lot closer together than you’d expect, but still fairly distinct. And sometimes two colors that are only a bit different actually appear identical to Andy.
So maybe the Balatro colors are enough (lemme know!) or maybe they are not. I assume that’s why a lot of “high contrast” variations do more than color, they incorporate different patterns and whatnot. Which, fair enough, the playing cards of Balatro already do.
Let’s do a few more fun CSS and color related links to round out the week:
Adam Argyle: A conic gradient diamond and okLCH — I’m always a little surprised at the trickery that conic gradients unlock. Whenever I think of them I’m like uhmmmmm color pickers and spinners I guess?
Michelle Barker: Creating color palettes with the CSS color-mix() function — Sure, color-mix() is nice for a one-off where you’re trying to ensure contrast or build the perfect combo from an unknown other color, but it can also be the foundational tool for a system of colors.
Keith Grant: Theme Machine — A nice take on going from choosing nice individual colors to crafting palettes, seeing them in action, and getting custom property output for CSS.
A much-needed disclaimer: You (kinda) can use functions now! I know, it isn’t the most pleasant feeling to finish reading about a new feature only for the author to say “And we’ll hopefully see it in a couple of years.” Luckily, right now you can use an (incomplete) version of CSS functions in Chrome Canary behind an experimental flag, although who knows when we’ll get to use them in a production environment.
Arguments, defaults, and returns!
I was drinking coffee when I read the news on Chrome prototyping functions in CSS and… I didn’t spit it or anything. I was excited, but I thought “functions” in CSS would be just like mixins in Sass — you know, a way of establishing reusable patterns of styles. That’s cool, but it’s really more or less syntactic sugar for writing less CSS.
But I looked at the example snippet a little more closely and that’s when the coffee nearly came shooting out my mouth.
Arguments?! Return values?! That’s worth spitting my coffee out for! I had to learn more about them, and luckily, the spec is clearly written, which you can find right here. What’s crazier, you can use functions right now in Chrome Canary! So, after reading and playing around, here are my key insights on what you need to know about CSS Functions.
What exactly is a function in CSS?
I like this definition from the spec:
Custom functions allow authors the same power as custom properties, but parameterized
They are used in the same places you would use a custom property, but functions return different things depending on the arguments we pass. The syntax for the most basic function is the @function at-rule, followed by the name of the function as a <dashed-ident> plus parentheses:
@function --dashed-border() {
/* ... */
}
A function without arguments is like a custom property, so meh… To make them functional we can pass arguments inside the parentheses, also as <dashed-ident>s:
@function --dashed-border(--color) {
/* ... */
}
We can use the result descriptor to return something based on our argument:
@function --dashed-border(--color) {
result: 2px dashed var(--color);
}
div {
border: --dashed-border(blue); /* 2px dashed blue */
}
We can even use defaults! Just write a colon (:) followed by the default value for that argument.
@function --dashed-border(--color: red) {
result: 2px dashed var(--color);
}
div {
border: --dashed-border(); /* 2px dashed red */
}
Functions can have type-checking for arguments and return values. This will be useful whenever we want to interpolate a value, just like we do with variables created with @property — and, once we have inline conditionals, to make different calculations depending on the argument type.
To add argument types, we pass a syntax component. That is the type enclosed in angle brackets, where color is <color> and length is <length>, just to name a couple. There are also syntax multipliers like plus (+) to accept a space-separated list of that type.
@function --custom-spacing(--a <length>) { /* ... */ } /* e.g. 10px */
@function --custom-background(--b <color>) { /* ... */ } /* e.g. hsl(50 30% 50%) */
@function --custom-margin(--c <length>+) { /* ... */ } /* e.g. 10px 2rem 20px */
If instead, we want to define the type of the return value, we can write the returns keyword followed by the syntax component:
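Based on the spec’s grammar, that might look like the following sketch (the function name and body are my own illustration, not from the original article):

```css
/* `returns <length>` type-checks the function's result */
@function --double(--x <length>) returns <length> {
  result: calc(var(--x) * 2);
}

div {
  width: --double(10px); /* 20px */
}
```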
Just a little exception for types: if we want to accept more than one type using the syntax combinator (|), we’ll have to enclose the types in a type() wrapper function:
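A hypothetical example of that wrapper (the function name is mine):

```css
/* Accepting either <length> or <percentage> requires the type() wrapper
   around the combinator; a bare `<length> | <percentage>` would be invalid */
@function --spacing(--v type(<length> | <percentage>)) {
  result: var(--v);
}
```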
While it doesn’t currently seem to work in Canary, in the future we’ll be able to take lists as arguments by enclosing them in curly braces. So, this example from the spec passes a list of values like {1px, 7px, 2px} and gets its maximum to perform a sum.
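The spec’s example, as best I can reconstruct it, looks like this:

```css
/* From the spec: take a comma-separated list of lengths,
   use its maximum in a sum */
@function --max-plus-x(--list <length>#, --x <length>) {
  result: calc(max(var(--list)) + var(--x));
}

div {
  width: --max-plus-x({1px, 7px, 2px}, 3px); /* 10px */
}
```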
I wonder then, will it be possible to select a specific element from a list? And also to define how long the list should be? Say we want to only accept lists that contain four elements, then select each one individually to perform some calculation and return it. Many questions here!
Early returns aren’t possible
That’s correct, early returns aren’t possible. This isn’t something defined in the spec that hasn’t been prototyped, but something that simply won’t be allowed. So, if we have two returns, one enclosed early behind a @media or @supports at-rule and one outside at the end, the last result will always be returned:
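A hypothetical example of what that means (names and values are mine):

```css
/* The "early" result is NOT final; the last result always wins */
@function --accent() {
  @media (width < 600px) {
    result: blue; /* intended early return… */
  }
  result: red; /* …but this one always overrides it */
}
```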
We have to change the order of the returns, leaving the conditional result for last. This doesn’t make a lot of sense in other programming languages, where the function ends after returning something, but there is a reason the C in CSS stands for Cascade: this order allows the conditional result to override the previous result, which is very CSS-y in nature:
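The reordered version (again, names and values are mine):

```css
/* Default first, conditional override last — the cascade picks the winner */
@function --accent() {
  result: red; /* default */
  @media (width < 600px) {
    result: blue; /* wins when the media query matches */
  }
}
```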
Here I wanted everyone to chip in and write about the new things we could make using functions. So the team here at CSS-Tricks put our heads together and thought about some use cases for functions. Some are little helper functions we’ll sprinkle a lot throughout our CSS, while others open new possibilities. Remember, all of these examples should be viewed in Chrome Canary until support expands to other browsers.
Here’s a basic helper function from Geoff that sets fluid type:
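The original snippet didn’t survive extraction, so here is a hypothetical reconstruction of what a fluid type helper could look like (the function name, preferred value, and bounds are all my assumptions):

```css
/* A sketch of a fluid type helper: clamps a viewport-relative size
   between a minimum and a maximum */
@function --fluid-type(--min <length>, --max <length>) {
  result: clamp(var(--min), 0.5rem + 2.5vw, var(--max));
}

h2 {
  font-size: --fluid-type(1.25rem, 2.5rem);
}
```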
This is one of those snippets I’m always grabbing from Steph Eckles’ smolcss site, and having a function would be so much easier. Actually, most of the snippets on Steph’s site would be awesome functions.
This one is from moi. When I made that demo using tan(atan2()) to create viewport transitions, I used a helper property called --wideness to get the screen width as a decimal between 0 and 1. At that moment, I wished for a function form of --wideness. As I described it back then:
You pass a lower and upper bound as pixels, and it will return a 0 to 1 value depending on how wide the screen is. So for example, if the screen is 800px, wideness(400px, 1200px) would return 0.5 since it’s the middle point
I thought I would never see it, but now I can make it myself! Using that wideness function, I can move an element through its offset-path as the screen goes from 400px to 800px:
.marker {
offset-path: path("M 5 5 m -4, 0 a 4,4 0 1,0 8,0 a 4,4 0 1,0 -8,0"); /* Circular Orbit */
offset-distance: calc(--wideness(400, 800) * 100%); /* moves the element when the screen goes from 400px to 800px */
}
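Based on that description, the helper itself could be sketched like this (a reconstruction of mine, not the article’s original code; it assumes unitless arguments, matching the --wideness(400, 800) usage, and that typed division of lengths is supported):

```css
/* Returns 0 when the viewport is --min px wide, 1 at --max px,
   and a proportional value in between */
@function --wideness(--min <number>, --max <number>) returns <number> {
  result: clamp(0, (100vw - var(--min) * 1px) / ((var(--max) - var(--min)) * 1px), 1);
}
```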
Right now, you can’t…
…use local variables. Although I tried them and they seem to work.
…use recursive functions (they crash!),
…list arguments,
…update a function and let the appropriate styles change,
…use @function in cascade layers, or in the CSS Object Model (CSSOM),
…use “the Iverson bracket functions … so any @media queries or similar will need to be made using helper custom properties (on :root or similar).”
After reading what on earth an Iverson bracket is, I understood that we currently can’t have a return value behind a @media or @support rule. For example, this snippet from the spec shouldn’t work:
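As best I can reconstruct it, the spec’s snippet is this one:

```css
/* From the spec: a result conditioned on a media query */
@function --suitable-font-size() {
  result: 16px;
  @media (width > 1000px) {
    result: 20px;
  }
}
```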
Although, upon testing, it seems like it’s supported now. Still, we can use a provisional custom property and return it at the end if it isn’t working for you:
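The fallback would look something like this (a sketch following the spec’s own workaround):

```css
/* Stash the conditional value in a provisional custom property,
   then return it at the end */
@function --suitable-font-size() {
  --size: 16px;
  @media (width > 1000px) {
    --size: 20px;
  }
  result: var(--size);
}
```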
What about mixins? Soon, they’ll be here. According to the spec:
At this time, this specification only defines custom functions, which operate at the level of CSS values. It is expected that it will define “mixins” later, which are functions that operate at the style rule level.
Here you will find a list of some of the best GitHub alternatives that provide private and public repositories.
Working in software development, we very often find ourselves needing to host our code somewhere. For this purpose, the masses blindly follow one single platform: GitHub. It cannot be denied that GitHub users have the choice of either Git or Subversion for version control. All GitHub users also get unlimited public code repositories. One more fascinating feature of GitHub is that it allows you to create ‘organizations’ — an organization behaves like a normal account, but at least one user account must be listed as its owner.
Apart from providing desktop apps for Windows and OS X, GitHub also lets its users and organizations host one website and unlimited project pages for free. The typical domain for a hosted website looks something like username.github.io, and the address of a project page looks like username.github.io/project-page.
Moving ahead, we have compiled a list of other services that can be used in place of GitHub without any harm. So let’s have a look at the list.
Bitbucket comes just next to GitHub in terms of usage and global popularity. Bitbucket also provides free accounts for users and organizations, with a limit of five users, along with access to unlimited private and public repos. One noteworthy feature is that it allows users to push their files using any Git client or the Git command line.
Atlassian is the developer of Bitbucket, providing users with version control capabilities through its web interface. A free Mac and Windows client is also available in the form of SourceTree, Atlassian’s own Git and Mercurial client.
The domain for your hosted website on Bitbucket will look something like accountname.bitbucket.org, and that of project pages like accountname.bitbucket.org/project. Bitbucket also allows its users to use their own domain name for their website.
Beanstalk is another good GitHub alternative, but it is not free. You get a two-week trial, after which, if you wish to continue, you have to pay a minimum of $15 for the cheapest Bronze package. The Bronze package allows a maximum of 10 repositories with 3 gigabytes of storage and up to 5 users.
Beanstalk supports the in-demand Git and Subversion systems for version control. It is developed by Wildbit and also allows code editing in the browser itself, so users need not switch to the command line every now and then.
GitLab is popular among users for features like dedicated project websites and an integrated project wiki. GitLab also provides automated testing and code delivery, so users can get more work done in less time without waiting for tests to be run manually. Other notable features include pull requests, a code viewer, and merge conflict resolution.
Developed by Fog Creek, Kiln — unlike GitHub — is not a free place to host your software or website. You can try its version control and code hosting for Git and Mercurial during a 30-day trial period; after that, users need to upgrade to a paid plan (minimum $18 a month) in order to continue working with Kiln. Kiln also charges separately for its code review module.
If you host your website with Kiln, your domain will look something like this:
Judging by the abundance of projects hosted on SourceForge, it has clearly been around for a long time. Compared to GitHub, SourceForge (developed by Slashdot Media) has an entirely different project structure. Unlike other version control hosts, SourceForge allows you to host both static and dynamic pages. One limitation of this service is that users can only create and host projects with unique names.
The typical domain for your hosted project will look like proj.sourceforge.net.
Scripting languages like Python, Perl, PHP, Tcl, Ruby, and Shell are supported on SourceForge’s servers. Users are free to choose Git, Subversion, or Mercurial as their version control system.
Google’s Git-based version control service came into existence — and moved to the Google Cloud platform — when Google Code was retired by Google itself. Although Google provides its own repositories to work with, you can also connect Cloud Source to other services such as GitHub and Bitbucket. Cloud Source stores users’ code and apps on Google’s own infrastructure, which makes it all the more reliable. Users can search their code right in the browser, and they also get cloud diagnostics to track problems while the code keeps running in the background.
Cloud Source offers Stackdriver Debugger that helps use the debugger in parallel with the other applications running.
GitKraken has grown popular among developers day by day thanks to the exclusive features it offers its users. The primary attractions of GitKraken are its beautiful interface and its focus on speed and ease of use for Git. GitKraken comes with an incredibly handy ‘undo’ button that helps users quickly roll back mistakes. GitKraken offers a free version for up to 20 users, as well as a premium version with several other good features.
We hope you enjoyed learning with us. If you have any doubts, queries, or suggestions, please let us know in the comment section below. Also share in the comments if you know any other good GitHub alternatives.
For a lot of programmers, finding a decent keyboard can significantly improve their working environment. Here, in this article, you’ll find a list of the best keyboards for programming and a straightforward comparison between them.
Programmers typically spend their days planning, writing, and checking code at their machines. The keyboard is, therefore, one of the most critical instruments in their arsenal. The right keyboard for your needs is not easy to find, which is why we have put together this list of the best programming keyboards.
It’s not just ambient RGB lighting or a cool minimalist aesthetic that you get by choosing an excellent keyboard. For gamers playing the best computer games, content developers, and professional programmers alike, what matters is speed, responsiveness, precision, and comfort.
What is that you need?
You need a keyboard that eases your work — one you can type on all day without strain, and one you can quickly program to reach your most frequent applications. An overly large keyboard can also eat into the space you have for moving your mouse.
Also, a keyboard with a comfortable, plush, magnetically attached wrist rest that gives your hands extra support when you are working long hours. And a keyboard with media functions for adjusting volume and playing, pausing, or skipping songs.
Ultimately, a keyboard that helps you move swiftly through your daily work!
Here are the best keyboards, from the best gaming keyboards to the perfect ones for productivity and creation. We have also included a comparison table to make sure you get the best price available.
The Redgear Blaze semi-mechanical wired gaming keyboard comes with three-color backlighting, a full aluminum body, and a Windows key lock for PC. The Blaze is designed especially for pro gaming and programming. It was intended for both programmers and low- or high-DPI players, so its layout leaves ample room to move the mouse around.
In addition, the ergonomic floating keycaps give you the best spacing for each press. The Blaze has perfectly positioned keycaps that deliver precise results with every click.
Features:
100M long-lasting switches for crisp response.
3 Color mode for different gaming setup needs.
Windows key lock option to block pop-up in the game.
19 keys anti-ghost for gamers and programmers
Floating keycaps with greater durability and high responsiveness
This keyboard comes with a wired USB interface and 19 anti-ghosting keys, designed especially for programmers and gamers. Four levels of brightness adjustment make it convenient for users to work with peace of mind. The keyboard features various internet and media hotkeys for easy access.
Features:
Wired USB keyboard interface with Red Backlit and four levels of brightness adjustment.
Max Plus is a robust 104-key USB mechanical keyboard with 12 additional multimedia keys. It has 7 LED modes and five brightness levels for its multicolor LEDs. For all this gaming action, it is a heavy-duty keyboard weighing 1.27 kg.
The keyboard is sturdy and durable, giving the user a perfect tactile feel. The blue switches on this keyboard have a very detectable actuation point that helps programmers code swiftly.
Features:
Full-size mechanical keyboard with 104 keys and 11 multimedia keys.
Backlit keyboard with 10 LED night modes and 5 brightness adjustment levels.
Double injection keycaps for a longer lifespan and a higher number of keystrokes.
High-quality USB connector coupled with strong braided cable.
The TVS Gold Bharat programming keyboard comes with an array of 104 mechanical keys with long-life switches. The keyboard also offers nine (9) vernacular languages to choose from, plus an option to select the interface (USB/PS2). Besides, the sculpted keycaps provide tactile feedback with each click.
Features:
Guaranteed 50 Million plus strokes per key
Highly reliable, with more than 200,000 Hrs MTBF
An everlasting presence with Laser Etched Characters on Keycaps
The option of working in two languages
Fitted with mechanical switches for long life
All the keyboards include the Rupee symbol
The Zeb-Transformer-K is a USB gaming keyboard with a multicolor LED effect. It has integrated media controls, laser keycaps, and an aluminum body. It also has a braided cable, a high-quality USB connector, and a backlight LED on/off function. The keyboard uses a USB interface with a power requirement of DC 5V, <200mA. Moreover, it has a keystroke life of 80 million presses.
Features:
Integrated Media control keys and multicolor LED with 4 modes – 3 Light Mode & 1 off Mode
Windows Key Enable/Disable Function and all Keys Enable/Disable Function
2-Step Stand, Laser Keycaps, Aluminum Body, Backlight LED ON/OFF function
Gold Plated USB, Braided Cable, modern design, and less power consumption.
The HyperX Alloy FPS RGB is a splendid, high-performance keyboard that ensures your skills and style are fully displayed. The robust stainless steel frame keeps the keyboard stable as you hammer the keys to trigger functions, fix errors, or rework program scripts.
The Alloy FPS RGB is designed for space-contracted setups, so you can maneuver easily without increasing your sensitivity to your mouse. It is also provided with a convenient USB load port and a braided, wear-resistant cable that makes portability easier.
Features:
RGB backlit keys with radiant lighting effects and a durable solid steel frame
Advanced customization with HyperX NGENUITY software and onboard memory for three profiles
Compact, ultra-portable design with detachable cable and convenient USB charge port
Kailh Silver Speed mechanical key switches
Game Mode, 100% Anti-ghosting, and N-Key Rollover functionalities
The Logitech Prodigy series offers advanced gaming-grade performance. Besides, for programmers, each keypress from fingers to screen is nearly instantaneous. It also accommodates the customization of five individual lighting zones with a range of over 16.8 million colors. Logitech Gaming Software can even synchronize lighting effects with other Logitech G devices for a real match system.
The Logitech G Prodigy allows users to work more quickly than with a standard keyboard thanks to high-performance keys that combine the best of tactile feel and typing performance.
Features:
4x faster gaming-grade performance than standard keyboards
Crisp and brilliant RGB lighting with 16.8 Million color options
Spill-resistant and highly durable to handle sudden accidents
Dedicated media controls to play, pause, skip and adjust in one go
Programmable function keys for custom commands and integrated palm rest and adjustable feet
The HyperX Alloy Core RGB is ideal for hardcore programmers and gamers looking to improve their keyboards’ style and performance without spending much money. The Alloy Core RGB is elegant, beautiful, and reliable, making it a sweeping tech keyboard for programmers.
The Alloy Core RGB is designed to provide stability and reliability for players and programmers who want a keyboard that will last with an enduring, strengthened plastic framework. In addition, the keyboard lock allows you to lock the keyboard without setting your whole system up.
Features:
Signature light Bar and dynamic RGB lighting effects
5 Zones multicolor customization option
Quiet and responsive keys with anti-ghosting
Durable solid frame with spill resistance
Spill-resistant and dedicated media controls
Quick access buttons for brightness, lighting modes, and game mode
Keyboard lock mode and flexible braided cable
Comparison Table
Conclusion
This was the complete list of best keyboards for programming that we think are currently the best in the Indian market. However, many other keyboards can also be included, but we have picked these best keyboards for this article.
As a matter of fact, it depends on the users’ choice and preference for what feels best in their hands. And this list of the best programming keyboards will help you compare and come to a final decision.
Creating a star rating component is a classic exercise in web development. It has been done and re-done many times using different techniques. We usually need a small amount of JavaScript to pull it together, but what about a CSS-only implementation? Yes, it is possible!
Cool, right? In addition to being CSS-only, the HTML code is nothing but a single element:
<input type="range" min="1" max="5">
An input range element is the perfect candidate here since it allows a user to select a numeric value between two boundaries (the min and max). Our goal is to style that native element and transform it into a star rating component without additional markup or any script! We will also create more components at the end, so follow along.
Note: This article will only focus on the CSS part. While I try my best to consider UI, UX, and accessibility aspects, my component is not perfect. It may have some drawbacks (bugs, accessibility issues, etc), so please use it with caution.
The <input> element
You probably know it, but styling native elements such as inputs is a bit tricky due to all the default browser styles and also the different internal structures. If, for example, you inspect the code of an input range, you will see different HTML in Chrome (or Safari, or Edge) versus Firefox.
Luckily, we have some common parts that I will rely on. I will target two different elements: the main element (the input itself) and the thumb element (the one you slide with your mouse to update the value).
Our CSS will mainly look like this:
input[type="range"] {
/* styling the main element */
}
input[type="range" i]::-webkit-slider-thumb {
/* styling the thumb for Chrome, Safari and Edge */
}
input[type="range"]::-moz-range-thumb {
/* styling the thumb for Firefox */
}
The only drawback is that we need to repeat the styles of the thumb element twice. Don’t try to do the following:
input[type="range" i]::-webkit-slider-thumb,
input[type="range"]::-moz-range-thumb {
/* styling the thumb */
}
This doesn’t work because the whole selector is invalid. Chrome & Co. don’t understand the ::-moz-* part and Firefox doesn’t understand the ::-webkit-* part. For the sake of simplicity, I will use the following selector for this article:
input[type="range"]::thumb {
/* styling the thumb */
}
But the demo contains the real selectors with the duplicated styles. Enough introduction, let’s start coding!
Styling the main element (the star shape)
We start by defining the size:
input[type="range"] {
--s: 100px; /* control the size*/
height: var(--s);
aspect-ratio: 5;
appearance: none; /* remove the default browser styles */
}
If we consider that each star is placed within a square area, then for a 5-star rating we need a width equal to five times the height, hence the use of aspect-ratio: 5.
That 5 value is also the value defined as the max attribute for the input element.
input[type="range"] {
--s: 100px; /* control the size*/
height: var(--s);
aspect-ratio: attr(max type(<number>));
appearance: none; /* remove the default browser styles */
}
Now you can control the number of stars by simply adjusting the max attribute. This is great because the max attribute is also used by the browser internally, so updating that value will control our implementation as well as the browser’s behavior.
This enhanced version of attr() is only available in Chrome for now so all my demos will contain a fallback to help with unsupported browsers.
The next step is to use a CSS mask to create the stars. We need the shape to repeat five times (or more depending on the max value) so the mask size should be equal to var(--s) var(--s) or var(--s) 100% or simply var(--s) since by default the height will be equal to 100%.
input[type="range"] {
--s: 100px; /* control the size*/
height: var(--s);
aspect-ratio: attr(max type(<number>));
appearance: none; /* remove the default browser styles */
mask-image: /* ... */;
mask-size: var(--s);
}
What about the mask-image property you might ask? I think it’s no surprise that I tell you it will require a few gradients, but it could also be SVG instead. This article is about creating a star-rating component but I would like to keep the star part kind of generic so you can easily replace it with any shape you want. That’s why I say “and more” in the title of this post. We will see later how using the same code structure we can get a variety of different variations.
Here is a demo showing two different implementations for the star. One is using gradients and the other is using an SVG.
In this case, the SVG implementation looks cleaner and the code is also shorter but keep both approaches in your back pocket because a gradient implementation can do a better job in some situations.
Styling the thumb (the selected value)
Let’s now focus on the thumb element. Take the last demo then click the stars and notice the position of the thumb.
The good thing is that the thumb is always within the area of a given star for all the values (from min to max), but the position is different for each star. It would be good if the position is always the same, regardless of the value. Ideally, the thumb should always be at the center of the stars for consistency.
Here is a figure to illustrate the position and how to update it.
The lines are the position of the thumb for each value. On the left, we have the default positions where the thumb goes from the left edge to the right edge of the main element. On the right, if we restrict the position of the thumb to a smaller area by adding some spaces on the sides, we get much better alignment. That space is equal to half the size of one star, or var(--s)/2. We can use padding for this:
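In code, that space is an inline padding of half a star (box-sizing keeps the padding inside the element’s defined size):

```css
input[type="range"] {
  padding-inline: calc(var(--s) / 2); /* half the size of one star */
  box-sizing: border-box; /* keep the padding inside the element's size */
}
```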
It’s better, but not perfect, because I am not accounting for the thumb size, which means we don’t have true centering. That’s not an issue, though, because I will make the thumb very small, with a width equal to 1px.
The thumb is now a thin line placed at the center of the stars. I am using a red color to highlight the position but in reality, I don’t need any color because it will be transparent.
You may think we are still far from the final result but we are almost done! One property is missing to complete the puzzle: border-image.
The border-image property allows us to draw decorations outside an element thanks to its outset feature. For this reason, I made the thumb small and transparent. The coloration will be done using border-image. I will use a gradient with two solid colors as the source:
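A first take might look like this (a sketch showing only the relevant declarations; the 100px outset is the value discussed next):

```css
input[type="range"]::thumb {
  width: 1px;
  /* a gradient with two solid colors as the source; the outset
     extends the painted area by 100px on each side of the thumb */
  border-image: conic-gradient(grey 50%, gold 0) fill 0 // 100px;
}
```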
The above means that we extend the area of the border-image from each side of the element by 100px and the gradient will fill that area. In other words, each color of the gradient will cover half of that area, which is 100px.
Do you see the logic? We created a kind of overflowing coloration on each side of the thumb — a coloration that will logically follow the thumb so each time you click a star it slides into place!
Now instead of 100px let’s use a very big value:
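Only the outset changes — any value big enough to cover the whole component works, such as the 500px used in the final code:

```css
input[type="range"]::thumb {
  width: 1px;
  /* a very large outset so the coloration spans the entire component */
  border-image: conic-gradient(grey 50%, gold 0) fill 0 // 500px;
}
```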
We are getting close! The coloration fills all the stars, but the split between the two colors sits at the center of the component rather than following the selected star. To fix this, we update the gradient a bit: instead of using 50%, we use 50% + var(--s)/2. We add an offset equal to half the width of a star, which means the first color takes more space, and our star rating component is perfect!
We can still optimize the code a little: instead of defining a height for the thumb, we keep it at 0 and rely on the vertical outset of border-image to spread the coloration.
I know that the syntax of border-image is not easy to grasp and I went a bit fast with the explanation. But I have a very detailed article over at Smashing Magazine where I dissect that property with a lot of examples that I invite you to read for a deeper dive into how the property works.
The full code of our component is this:
<input type="range" min="1" max="5">
input[type="range"] {
  --s: 100px; /* control the size */
  height: var(--s);
  aspect-ratio: attr(max type(<number>));
  padding-inline: calc(var(--s) / 2);
  box-sizing: border-box;
  appearance: none;
  mask-image: /* ... */; /* either an SVG or gradients */
  mask-size: var(--s);
}
input[type="range"]::thumb {
  width: 1px;
  border-image:
    conic-gradient(at calc(50% + var(--s) / 2), grey 50%, gold 0)
    fill 0 // var(--s) 500px;
  appearance: none;
}
That’s all! A few lines of CSS and we have a nice star rating component!
Half-Star Rating
What about having a granularity of half a star as a rating? It’s something common and we can do it with the previous code by making a few adjustments.
First, we update the input element to increment in half steps instead of full steps:
<input type="range" min=".5" step=".5" max="5">
By default, the step is equal to 1, but we can update it to .5 (or any other value), and then we update the min value to .5 as well. On the CSS side, we change the padding from var(--s)/2 to var(--s)/4, and we do the same for the offset inside the gradient.
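Applied to the previous code, only two declarations change (a sketch showing just those):

```css
input[type="range"] {
  padding-inline: calc(var(--s) / 4); /* was var(--s) / 2 */
}
input[type="range"]::thumb {
  border-image:
    conic-gradient(at calc(50% + var(--s) / 4), grey 50%, gold 0) /* offset was var(--s) / 2 */
    fill 0 // var(--s) 500px;
}
```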
The difference between the two implementations is a factor of one-half which is also the step value. That means we can use attr() and create a generic code that works for both cases.
input[type="range"] {
  --s: 100px; /* control the size */
  --_s: calc(attr(step type(<number>), 1) * var(--s) / 2);
  height: var(--s);
  aspect-ratio: attr(max type(<number>));
  padding-inline: var(--_s);
  box-sizing: border-box;
  appearance: none;
  mask-image: /* ... */; /* either an SVG or gradients */
  mask-size: var(--s);
}
input[type="range"]::thumb {
  width: 1px;
  border-image:
    conic-gradient(at calc(50% + var(--_s)), grey 50%, gold 0)
    fill 0 // var(--s) 500px;
  appearance: none;
}
Here is a demo where modifying the step is all that you need to do to control the granularity. Don’t forget that you can also control the number of stars using the max attribute.
Using the keyboard to adjust the rating
As you may know, we can adjust the value of an input range slider using a keyboard, so we can control the rating with the keyboard as well. That’s a good thing, but there is a caveat: due to the use of the mask property, we no longer have the default outline that indicates keyboard focus, which is an accessibility concern for those who rely on keyboard input.
For a better user experience and to make the component more accessible, it’s good to display an outline on focus. The easiest solution is to add an extra wrapper:
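For example, a neutral span (any wrapper element works):

```html
<span>
  <input type="range" min="1" max="5">
</span>
```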
That will have an outline when the input inside has focus:
span:has(:focus-visible) {
  outline: 2px solid;
}
Try using your keyboard in the example below to adjust both ratings:
Another idea is to consider a more complex mask configuration that prevents hiding the outline (or any outside decoration). The trick is to start with the following:
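That starting point is a single full-coverage gradient layer flagged so it isn’t clipped (a sketch; the gradient value matters less than the no-clip keyword):

```css
input[type="range"] {
  /* one opaque layer that is not clipped to the element's box */
  mask: conic-gradient(#000 0 0) no-clip;
}
```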
The no-clip keyword means that nothing from the element will be clipped (including outlines). Then we use an exclude composition with another gradient. The exclusion will hide everything inside the element while keeping what is outside visible.
Finally, we add back the mask that creates the star shapes:
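All together, the configuration might look like this (a sketch on my part — the star layer URL is a placeholder and the exact layer order is an assumption):

```css
input[type="range"] {
  mask:
    url("star.svg") 50% / var(--s),    /* the star shapes (or gradients) */
    conic-gradient(#000 0 0) exclude,  /* hides everything inside the element */
    conic-gradient(#000 0 0) no-clip;  /* covers everything, outline included */
}
```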
I prefer using this last method because it maintains the single-element implementation, but maybe your HTML structure allows you to add focus styles on a parent element, in which case you can keep the mask configuration simple. It totally depends!
As I said earlier, what we are making is more than a star rating component. You can easily update the mask value to use any shape you want.
Here is an example where I am using an SVG of a heart instead of a star.
Why not butterflies?
This time I am using a PNG image as a mask. If you are not comfortable using SVG or gradients, you can use a transparent image instead. As long as you have an SVG, a PNG, or gradients, there is no limit on the shapes you can create.
In that last example, I am not repeating a specific shape, but using a more complex mask configuration to create a signal shape.
Conclusion
We started with a star rating component and ended with a bunch of cool examples. The title could have been “How to style an input range element,” because that is what we did: we upgraded a native component without any scripts or extra markup, using only a few lines of CSS.
What about you? Can you think about another fancy component using the same code structure? Share your example in the comment section!