public_html permissions: Secure defaults and best practices for web hosting



    What the public_html folder is and how it maps to what visitors see

    1. public_html as the document root (web root) for a primary domain

    From a hosting platform’s point of view, public_html is less a “magic folder” and more a contract: the web server takes an incoming request and translates its URL path into a filesystem path under a configured document root. In many cPanel environments, that document root is the account’s public_html directory, and the only files a browser can fetch are the ones the web server can map into that tree.

    At TechTide Solutions, we treat this mapping as part of the security boundary, not just a convenience. A secure permission model starts with the assumption that anything placed under the document root is potentially retrievable, cacheable, and indexable—sometimes by bots you didn’t invite. That’s why we push teams to separate “web assets” (templates, compiled front-end bundles, images) from “web-adjacent assets” (configuration, keys, backups, exports).

    Market reality reinforces that stance: security is no longer a niche line item, with worldwide end-user spending on information security projected to reach $213 billion in 2025. That is a polite way of saying that the average business has learned (sometimes the hard way) that defaults matter.

    2. Default homepage filenames inside public_html and how web servers pick them

    When a visitor requests a directory (for example, the site root with a trailing slash), the server needs a rule for “which file represents this directory.” On Apache, that behavior is explicitly controlled by the DirectoryIndex directive, which serves the first matching index resource in its configured order. That is why a simple upload of a CMS can suddenly change what loads at “/” without any DNS or routing changes.

    Operationally, that directory-to-file resolution creates a subtle permissions pitfall: it’s not enough for the target index file to be readable; every parent directory in the path must also be traversable by the server process. In audits, we often see teams correctly set file readability but accidentally lock directory traversal, then spend hours debugging a “site is down” incident that is really just an execute bit problem on a parent directory.
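    To make that pitfall concrete, the sketch below (sandbox paths only; `stat -c` is GNU coreutils syntax) walks upward from a served file and prints the mode of every parent directory, which is usually enough to spot a missing traverse bit:

```shell
# Walk upward from a served file and print each parent directory's mode,
# to spot a parent that denies traversal. All paths here are a throwaway
# sandbox standing in for a real document root.
root="$(mktemp -d)"
mkdir -p "$root/public_html/assets"
echo 'body { color: #333; }' > "$root/public_html/assets/site.css"
chmod 711 "$root/public_html"        # traversable, not listable
chmod 755 "$root/public_html/assets"

dir="$(dirname "$root/public_html/assets/site.css")"
while [ "$dir" != "/" ]; do
    printf '%s  %s\n' "$(stat -c '%a' "$dir")" "$dir"
    dir="$(dirname "$dir")"
done
```

    On BSD or macOS, `stat -f '%Lp'` replaces the GNU `stat -c '%a'` used above.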

    Practically speaking, we recommend treating “index resolution” as part of your deployment contract. A build pipeline should know which index file is expected, ensure it exists, and avoid leaving behind alternate index files from previous releases that might be more permissive, less hardened, or simply outdated.
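    As a hedged illustration of that deployment contract (the paths and the index-file set are assumptions, not a cPanel feature), a release step can fail loudly when more than one candidate index file survives a deploy:

```shell
# Deployment sanity check: fail if the release leaves more than one candidate
# index file in the document root, e.g. an old index.html sitting next to a
# new index.php. The docroot here is a sandbox for illustration.
docroot="$(mktemp -d)"
printf '<?php echo "hello";' > "$docroot/index.php"

count=$(find "$docroot" -maxdepth 1 -type f \
    \( -name 'index.php' -o -name 'index.html' -o -name 'index.htm' \) | wc -l)
if [ "$count" -ne 1 ]; then
    echo "ERROR: expected exactly 1 index file, found $count" >&2
    exit 1
fi
echo "OK: exactly one index file"
```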

    3. Addon domains and subdomains stored as subfolders under public_html

    Shared hosting control panels frequently encourage a “domains as subfolders” layout because it’s easy to visualize and easy to back up. In cPanel specifically, the common pattern is that addon and subdomain document roots are placed beneath the primary site’s directory tree unless an administrator deliberately changes the policy. In fact, cPanel documents that addon and subdomain document roots are restricted to public_html by default unless a WHM setting is disabled, which explains why many accounts end up with a dense forest of sibling sites under one top-level web root.

    Security-wise, that default has consequences. A permissive setting intended for “site A” can accidentally become permissive for “site B” if they share ancestor directories or if the wrong group ownership leaks across multiple doc roots. The risk is not theoretical: a vulnerable plugin in an older addon domain can become a pivot point into shared writable directories, turning one compromised site into a compromise multiplier.

    Our rule of thumb is simple: if you must host multiple sites in one account, treat each doc root as its own compartment. Separate writable directories per site, avoid shared “uploads” across domains, and make sure no site can write into another site’s application code.

    How Unix file permissions affect public_html access

    1. Read, write, execute meanings for directories vs files in web serving

    Unix permissions are deceptively simple until you apply them to directories. With files, “read” means the process can view contents, “write” means it can modify contents, and “execute” means it can run the file as a program (assuming the file is executable content). With directories, the meanings shift: “read” is the ability to list directory contents, “write” is the ability to create or delete entries, and “execute” is the ability to traverse the directory as part of a path lookup.

    In web hosting, that directory “execute” bit is the quiet kingmaker. A browser doesn’t need to list your directories for damage to occur; it just needs the server process to be able to traverse into them. Conversely, the server can often serve a known file path even when directory listing is blocked, as long as traversal is allowed and the file itself is readable.

    To keep the mental model consistent, we teach teams to speak in verbs: files are “readable” and “writable,” while directories are “traversable” and “listable.” That vocabulary shift reduces mistakes when debugging access issues that feel random but are usually just permission semantics doing exactly what they were designed to do.
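    A quick sandbox experiment makes that vocabulary tangible; run it as a non-root user, since root bypasses most permission checks:

```shell
# "Traversable but not listable" in action: with the directory's read bit
# removed but the execute (traverse) bit kept, a known path is still
# reachable while enumerating the directory fails.
d="$(mktemp -d)"
echo "hello" > "$d/known.txt"
chmod u-r "$d"               # owner keeps wx: traversable, not listable

cat "$d/known.txt"                           # succeeds: traversal + file read
ls "$d" 2>/dev/null || echo "listing denied" # fails for non-root users
chmod u+r "$d"               # restore so the temp dir can be cleaned up
```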

    2. Why visitors are not reading files directly and what the web server user actually needs

    Browsers do not read your filesystem. The only actor reading your files is the web server process (and any application runtimes it spawns), which asks the kernel for permission like any other process. That distinction matters because “public access” is not a special category in Unix; it’s a side effect of whether the server’s effective user and group can traverse directories and read files.

    On shared hosting, the server might run as a generic user, or it might hand off script execution to per-account handlers. The consequences are huge: in one model, static assets and scripts are accessed by the same generic process identity; in another model, scripts execute as the account owner while static files are still served by the web server identity. Each model changes what the minimum safe permissions are.

    When we design security baselines, we start by identifying the true readers and writers. After that, permissions become an engineering exercise rather than superstition: grant traversal and read where necessary for serving, and reserve write access for the narrowest set of processes that must mutate state (uploads, caches, session stores, and application data).

    3. Ownership and group strategy for letting the web server read without letting everyone edit

    Ownership is the second half of the permissions story, and it’s where most “secure defaults” succeed or fail. In the healthiest setups, application code is owned by the account user, the web server can read it (either via group membership or via “other” permissions), and only the account user (or a deployment user acting as that account) can write to it.

    Group strategy is the lever that makes shared hosting work at scale. Some hosts configure the document root so the server process’s group can read while “other” is restricted, which effectively blocks cross-account snooping without breaking anonymous web access. Meanwhile, the most dangerous anti-pattern is shared group write access, because it turns “any process in this group” into a potential editor of your application code.

    At TechTide Solutions, we prefer a boring principle: code should not be mutable by the web server at runtime. Upload directories and cache directories may need special treatment, but the core application tree should be as close to read-only as the hosting model allows, with deployments handled by controlled tooling rather than by the running web process.

    Default public_html permissions on cPanel-based hosting and shared servers

    1. Common defaults for public_html, subdirectories, and files

    On many cPanel-based shared servers, you’ll see a consistent baseline: the top-level document root is more restrictive than its subdirectories, and regular content files are readable by the server but not writable by the world. Even when the precise modes vary by provider, the pattern usually reflects the same intent: allow the site to be served, limit cross-account visibility, and discourage world-writable shortcuts.

    Host-level hardening features can also rewrite what “default” means over time. cPanel notes that EasyApache’s FileProtect aims to secure each user’s document root by adjusting permissions and ownership, which is why a permissions change that “worked yesterday” can be silently reverted after a system update or a security policy refresh.

    In practice, that automation is a gift and a curse. Good automation prevents insecure drift; brittle automation surprises developers who were relying on permissive behavior. Our advice is to treat the host’s baseline as a platform contract and avoid fighting it—especially on shared hosting where the host’s security model is designed to protect multiple tenants.

    2. Why public_html is often 750 and how group membership allows serving to the public

    Some providers lock down the document root so it is not world-traversable, then grant access to a specific group that the web server process belongs to. The elegant part of this pattern is that it allows anonymous visitors to fetch pages without granting every other local user on the same machine the ability to explore your directory tree.

    Conceptually, this is the same idea you see in web server suEXEC models, where the server can access doc roots based on controlled identity rather than on broad “other” permissions. LiteSpeed’s documentation even illustrates this group-based approach in its security configuration guidance, showing how shared hosting can rely on group identity to serve content while limiting lateral access between users, as described in a shared hosting example using suEXEC mode and group-based docroot access.

    From our perspective, the key is not the exact mode but the intent: public reachability should come from the web server’s controlled identity, not from making your web root readable and writable by everyone who happens to share the machine.
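    A minimal sketch of that pattern on a throwaway directory (the group name is an assumption; hosts variously use nobody, www-data, or a per-account group):

```shell
# Group-based docroot pattern: owner gets full control, the web server's
# group gets traverse+read, and "other" gets nothing. The path is a sandbox
# standing in for a real public_html.
docroot="$(mktemp -d)/public_html"
mkdir -p "$docroot"
chmod 750 "$docroot"
# On a real host, you would also hand the directory to the server's group:
#   chgrp nobody "$docroot"   # group name varies; requires privileges
stat -c '%a %n' "$docroot"
```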

    3. How server configuration changes the defaults including user ownership models

    Configuration choices like the PHP handler, the MPM model, and whether per-user execution is enabled can change the safest baseline dramatically. Under some handlers, scripts run as a generic server identity; under others, scripts run as the vhost owner. That difference determines whether “only the owner can read” is viable or whether it will break the site.

    cPanel’s own handler documentation spells out that PHP execution identity depends on the handler and whether suEXEC (or similar per-user modules) is installed, which is a polite reminder that permissions are never purely a filesystem story—they are a filesystem-plus-runtime story.

    When we inherit an existing hosting environment, we always start by mapping the execution model: who serves static files, who executes scripts, and who writes application data. Only after that mapping do we bless a permission baseline, because copying “best practices” from a different execution model is how well-meaning teams accidentally create outages or security gaps.

    Recommended baseline public_html permissions for most websites

    1. Directories: keeping traversal and access predictable with 0755

    For many conventional sites, the baseline directory posture is “owner can manage; everyone else can traverse.” In human terms, that means your deployment user can create, delete, and reorganize directories, while the server and anonymous visitors can access paths to fetch known files. This is the default many teams recognize because it “just works” across a wide range of hosting stacks.

    Still, we rarely frame it as “use this octal mode everywhere,” because the rule is not truly about a number. The rule is about predictability: every directory on the path to a public asset must be traversable by the serving identity, and write access should not be granted casually to groups that include runtime processes or unrelated users.

    In real projects, we treat directory permissions as part of the deploy artifact. A static export of a marketing site, a WordPress theme, or a Laravel public directory all share the same basic need: the server must traverse and read, but it should not be able to rewrite your code by accident or by exploitation.

    2. Files: using 0644 for content and when 0755 is appropriate for executable scripts

    Files follow the same philosophy: read access is needed for serving, while write access should be limited to the owner. For most web content—HTML, CSS, images, compiled bundles, and server-side source files—the server needs read access, but it does not need permission to modify the file.

    Executable scripts are a different category. Traditional CGI binaries or scripts invoked directly by the server must be runnable by the identity that executes them. Even then, “executable” does not mean “world editable.” A safe pattern is to make executable content runnable while still keeping it non-writable to the group and the world.

    We also draw a hard line between “executed by an interpreter” and “executed by the kernel.” A PHP file typically should not need the execute bit at all; it is read by the PHP runtime and interpreted. Mixing those two mental models is how teams end up with confusing security postures and server errors that vanish only when someone makes everything permissive.
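    One concrete cleanup that follows from this distinction is stripping stray execute bits from interpreter-read sources. The sketch below targets .php files in a sandbox tree; pointed at a real docroot, it should be run deliberately, not reflexively:

```shell
# PHP sources are read by the interpreter, not executed by the kernel, so
# the execute bit is normally unnecessary. Strip it from all .php files.
docroot="$(mktemp -d)"                      # sandbox stand-in for a docroot
printf '<?php phpinfo();' > "$docroot/info.php"
chmod 755 "$docroot/info.php"               # an accidentally-executable file

find "$docroot" -type f -name '*.php' -exec chmod a-x {} +
stat -c '%a %n' "$docroot/info.php"         # now 644
```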

    3. Avoiding 0777 and using safer alternatives when something needs write access

    Wide-open permissions are the duct tape of shared hosting: they appear to fix the immediate problem, and they quietly create a larger one. A world-writable directory inside a web root is an invitation for abuse, because any process that can reach it can potentially drop executable content, overwrite assets, or poison caches.

    Instead of that shortcut, we advocate a toolbox of safer alternatives. One option is to isolate write access to a single uploads directory and enforce file-type and extension rules at the application layer. Another option is to relocate writable paths outside the document root and serve them through controlled endpoints. A third approach is to rely on per-user runtime models so that the only writer is the account owner identity.
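    The first option can be sketched like this (sandbox paths; the commented 775-with-group variant applies only where a separate server group genuinely must write):

```shell
# Safer alternative to 0777: one deliberately narrow writable directory
# instead of blanket write access. Paths are a sandbox for illustration.
docroot="$(mktemp -d)"
mkdir -p "$docroot/uploads"

chmod 755 "$docroot"            # code tree: owner-writable only
chmod 755 "$docroot/uploads"    # per-user PHP: the owner IS the runtime writer
# Only widen if a distinct server group genuinely writes here, e.g.:
#   chmod 775 "$docroot/uploads" && chgrp <server-group> "$docroot/uploads"
stat -c '%a %n' "$docroot/uploads"
```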

    SuPHP-style environments are especially unforgiving here: many configurations simply refuse to execute code when it is writable by group or others, which is both a security feature and a troubleshooting clue. Liquid Web summarizes this behavior succinctly, noting that SuPHP enforces strict permission requirements and may refuse execution when files or directories are too permissive.

    Least-privilege public_html permissions and hardened setups

    1. Per-user PHP handlers and when 0700 directories and 0600 files can work

    In hardened setups, we sometimes aim for “owner-only by default,” where directories and files are readable solely by the account user. That posture can work when the runtime that reads the code is the same identity that owns it—typical of well-configured per-user execution models. Under those models, the web server routes requests, but the application runtime reads source files as the account user, not as a shared system identity.

    However, the approach has limits. Static assets are often still served by the web server process itself, and that process may not have access to owner-only content. A hardened strategy therefore usually combines two trees: a tightly locked application tree and a public-facing assets tree with broader read permission.

    When we implement this split, we make it explicit in the repository layout and in deployment automation. Put differently, least privilege is easiest when the directory structure reflects it: “private application” and “public assets” become separate, enforceable concerns rather than an informal convention.
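    A hedged sketch of that two-tree split, using illustrative directory names under a sandbox base:

```shell
# Two-tree layout: the application tree is owner-only, while the public
# assets tree stays traversable and readable for the web server identity.
base="$(mktemp -d)"
mkdir -p "$base/app/config" "$base/public_html/assets"

find "$base/app" -type d -exec chmod 700 {} +          # private: owner only
find "$base/public_html" -type d -exec chmod 755 {} +  # public: traversable
stat -c '%a %n' "$base/app" "$base/public_html"
```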

    2. When Apache still needs world-readable assets and why 0644 remains common

    Even in sophisticated hosting environments, broad readability for static files remains common because it’s compatible with nearly every serving model. Apache (or another server) can serve images, stylesheets, and scripts without needing membership in per-account groups or special ACL rules, and it can do so efficiently.

    That said, “world-readable” is not automatically “world-exposed.” Exposure still depends on document root mapping, rewrite rules, and whether the file sits under a path that can be requested. The defensive move is therefore twofold: keep permissions reasonable, and keep sensitive files out of the web root entirely.

    In our client work, we’re pragmatic: if the host’s model requires broad read access, we accept it and harden elsewhere. Secrets go outside the web root, uploads are constrained, directory indexing is disabled, and execution permissions are tightly controlled. That layered posture reduces risk without forcing a fragile permission scheme onto a platform that will not support it.

    3. More restrictive patterns: 0711 or 0701 directory traversal and locked-down file modes

    Some environments use an interesting middle ground: directories are traversable but not listable, meaning you can access a known path but you cannot enumerate the directory contents. In theory, that reduces accidental disclosure when a misconfiguration enables directory browsing, and it also limits what other local users can learn if they have shell access on the same server.

    In practice, this pattern is most common above the document root (for example, the account’s home directory), while the public web tree remains more readable to avoid breaking asset serving. Notably, cPanel’s FileProtect behavior includes home-directory hardening and document-root adjustments; that automated permission and ownership enforcement is a reminder that restrictive traversal patterns are often part of a host’s standardized security stance.

    Our viewpoint is conservative: traversal-only directories can be useful, but they’re not a substitute for correct application-layer access controls. If a file must never be served, permissions alone are rarely the best guarantee; placement outside the document root and explicit deny rules are more dependable.

    CGI, PHP execution, and write-permission rules that can trigger server errors

    1. Preventing group and other write permissions on key directories and executables

    On servers that enable suEXEC-like models, “too writable” is not merely risky—it can be treated as an execution blocker. The rationale is straightforward: if a script or its directory is writable by parties beyond the trusted owner, an attacker could modify it and gain code execution through the web server’s normal behavior.

    Apache’s own documentation explains that suEXEC exists to run CGI programs under different user IDs and includes a strict security model, and those security checks are the root cause behind many “it works locally but not on the host” incidents.

    In our troubleshooting playbook, permission errors are rarely solved by adding more permissions. Instead, we remove unsafe write access, fix ownership, and isolate writable directories. That discipline prevents a short-term fix from becoming a long-term vulnerability, especially on shared servers where one compromised process can otherwise tamper with shared runtime surfaces.

    2. CGI and PHP programs run as the account owner and what that means for permissions

    The most important practical implication of per-user execution is that the runtime no longer needs broad write access to make the site function. Uploads can be owned and written by the account user identity, caches can be owned by that identity, and application code can remain non-writable at runtime.

    Yet that same model also exposes weak application behavior faster. If a script tries to write logs into its own code directory, for example, it may fail under a least-privilege scheme. Rather than relaxing permissions, we prefer to redirect writes into dedicated writable locations and treat that separation as part of application architecture.

    cPanel’s handler documentation reinforces why this works: handlers and per-user modules determine whether scripts execute as the virtual host owner or as a shared system user, which means “the right permissions” are inseparable from “the configured execution identity.”

    3. Internal Server Error and other symptoms tied to incorrect execute or write bits

    When permissions are wrong, the browser feedback is often unhelpful: a generic internal server error, a blank page, or a forbidden response that doesn’t mention the real cause. Under the hood, the error log usually tells the truth: a directory isn’t traversable, a script isn’t executable, a file is writable by an unsafe class of users, or ownership doesn’t match the configured policy.

    From our experience, the fastest diagnostic path is to follow the server’s identity chain. First, confirm which process is serving the request (web server, PHP handler, FastCGI wrapper). Next, verify traversal permissions on every parent directory in the file path. Finally, validate that “write access exists only where the application must write,” because that is the most common reason hardened handlers refuse to execute.
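    The traversal step of that checklist can be scripted. This sketch reports the first parent directory that denies traversal to the current user (util-linux users can get the same view with `namei -m /path/to/file`):

```shell
# Walk every parent of a path and report the first directory that denies
# traversal (missing execute bit) for the current user.
check_traversal() {
    p="$1"
    while [ "$p" != "/" ] && [ -n "$p" ]; do
        p="$(dirname "$p")"
        if [ -d "$p" ] && [ ! -x "$p" ]; then
            echo "traversal blocked at: $p"
            return 1
        fi
    done
    echo "all parents traversable for $(id -un)"
}

# Demo on a sandbox where one parent is missing its execute bits entirely:
root="$(mktemp -d)"
mkdir -p "$root/site/assets"
chmod 600 "$root/site"                       # no traverse bit at all
check_traversal "$root/site/assets/logo.png" || true
chmod 700 "$root/site"                       # restore for cleanup
```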

    Ironically, the easiest mistakes are made during frantic incident response. A panicked chmod that makes everything writable can temporarily mask the real issue while planting the seeds for a later compromise. A calmer approach—fix ownership, remove unsafe writes, and confirm traversal—is slower in the moment but faster in the long run.

    Access control inside public_html beyond chmod

    1. Blocking direct access to sensitive scripts with per-file and per-directory rules

    Permissions decide what the server can read; access control decides what the server will serve. That distinction is crucial because many sensitive artifacts must remain readable to the runtime while still being unservable as direct web responses. Think of templates, internal include files, diagnostic scripts, or admin entry points that should only be reachable behind authentication.

    On Apache, a common approach is to deny access in a directory context or per-file context. The core authorization module documents that Require all denied can block access unconditionally, which is blunt but effective for directories that should never be served directly, such as internal configuration folders that must remain on disk for includes.
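    A minimal per-directory sketch, assuming Apache 2.4 with mod_authz_core and an illustrative path:

```apache
# Deny direct web access to an internal include directory while leaving its
# files readable on disk for the application runtime (Apache 2.4+).
<Directory "/home/account/public_html/includes">
    Require all denied
</Directory>
```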

    At TechTide Solutions, we also like the “default deny” posture for high-risk directories: explicitly grant access to public entry points and deny everything else. That strategy pairs well with modern frameworks that route requests through a single front controller, because it narrows the exposed surface area without relying on developers to remember which internal file might become requestable later.

    2. Disabling directory listings and handling directories without an index file

    Directory listing is one of those features that feels helpful until it becomes an incident. If a directory lacks an index file and autoindexing is enabled, the server may generate a listing of files—sometimes including backups, exports, or build artifacts that were never meant to be browsed.

    Apache’s documentation is explicit that if no DirectoryIndex resource is found, the server may generate a listing when indexing is enabled; that listing behavior is implemented by the mod_autoindex module, whose options govern what a generated index displays.

    Operationally, we treat “no index file” as an expected condition rather than a configuration error. For directories that should never be browsed, deny rules are clearer than relying on the presence of an empty placeholder file. For directories that should be browsed (rare), we generate a controlled listing view inside the application instead of exposing raw filesystem listings.
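    Assuming Apache with mod_autoindex loaded, the listing feature can be switched off for the whole docroot (path illustrative):

```apache
# With indexing disabled, a request for a directory that lacks an index file
# returns 403 Forbidden instead of an automatic file listing.
<Directory "/home/account/public_html">
    Options -Indexes
</Directory>
```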

    3. Keeping secrets out of public_html and restricting access to required external files

    When teams ask us how to secure public_html, we usually answer with a question: “Why is that secret inside the web root at all?” Permissions can reduce risk, but location is a stronger control. A database credential stored outside the document root is simply harder to leak through misrouting, misconfiguration, or accidental static serving.

    Modern guidance is clear that secrets management should be treated as a lifecycle discipline, not a one-time placement decision. OWASP’s guidance emphasizes process and tooling around secrets, including safe injection into deployment systems, as described in best practices for secrets management across environments and pipelines.

    In shared hosting, “outside public_html” often means “in the account home directory” with careful traversal rules, while application code includes that file by path. In managed deployments, it can mean environment variables, vault-backed injection, or runtime configuration stores. Either way, our stance is consistent: if it would hurt to leak, do not place it where the web server might serve it by accident.
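    As a layout sketch only (the filenames and paths are assumptions), the idea looks like this on disk:

```shell
# Credentials live one level above the docroot, readable only by the owner,
# so the web server can never serve them as a static file. The application
# then reads the file by absolute path. Sandbox paths for illustration.
home="$(mktemp -d)"
mkdir -p "$home/public_html"
printf 'DB_PASS=example\n' > "$home/.env"    # outside the web root
chmod 600 "$home/.env"
stat -c '%a %n' "$home/.env"
```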

    How to set, fix, and maintain public_html permissions safely

    1. Using cPanel File Manager to locate public_html and verify permission values

    For many teams, cPanel’s File Manager is the only interface they have, and that’s fine as long as the workflow is deliberate. The practical routine is: locate the document root, inspect directory permissions up the tree, confirm ownership, and only then change a setting. Randomly “making it work” in the UI is the fastest way to create brittle, inconsistent security.

    cPanel’s documentation shows that the File Manager includes a permissions editor for files and folders, which is useful for spot fixes—especially when you’re diagnosing a single blocked script or a single directory that is not traversable.

    In our experience, UI fixes are best treated as temporary. Once you find the correct state, capture it in a repeatable process: deployment scripts, a post-deploy permission audit, or at least a checklist that prevents the same incident from recurring after the next update.

    2. Using FTP clients to set folder and file permissions consistently

    FTP and SFTP clients are often used as a “deployment tool,” even though they were never designed to be one. That reality creates a security challenge: a client configured incorrectly can upload files with permissive modes, wrong ownership expectations, or inconsistent directory flags. Worse, repeated manual uploads can create a patchwork permission model that looks fine until a handler rejects it or an attacker finds the weakest directory.

    Our preferred approach is to stop treating FTP as deployment and start treating it as a transport layer for a controlled artifact. Upload a release bundle, expand it in a controlled way, and then normalize permissions as a final step. That normalization is the key, because it turns “whatever my client did” into “what our baseline requires.”

    Even when teams must use FTP for operational reasons, consistency is achievable. A simple practice—always reapplying a known-good permission policy after uploads—prevents the slow drift that eventually results in a broken site or an avoidable exposure.

    3. Command-line repair workflows using chmod and find for recursive permission resets

    Command-line workflows are the fastest way to restore consistency, as long as they’re executed with care. A common pattern is to use find to target directories separately from regular files, then apply a symbolic-mode policy that expresses intent without forcing you to memorize octal values.

    The GNU documentation explains that symbolic modes let you describe permission changes using u/g/o/a and rwx flags, which we like because it reads like a policy. For example, “directories should be traversable by group and others, files should be readable, and only the owner should write” is clearer in symbolic form than in a numeric shorthand.

    A Safe, Expressive Reset Pattern

    # From inside the document root directory:

    # 1) Normalize directories to owner full access, others traverse+read
    find . -type d -exec chmod u=rwx,go=rx {} \;

    # 2) Normalize regular files to owner read/write, others read
    find . -type f -exec chmod u=rw,go=r {} \;

    After that baseline reset, we selectively loosen permissions only where required (uploads, caches) and only in the narrowest scope possible. The guiding principle stays constant: normalize broadly, then carve exceptions narrowly.

    TechTide Solutions: building secure-by-default custom solutions for real hosting environments

    1. Custom web applications and deployment pipelines that enforce correct public_html permissions

    In our consulting work, most permission problems are not created by ignorance; they’re created by workflow. A developer deploys late at night, an urgent hotfix is uploaded manually, a plugin update writes files unexpectedly, and the permission model becomes “whatever happened last.” That’s why we push security into pipelines rather than relying on memory.

    At TechTide Solutions, we build deployment flows that treat permissions as part of the release artifact: after code is synced, a normalization step sets directory and file policies, then an exception step grants write access only to designated runtime directories. Finally, an audit step verifies the result and fails the deployment if unsafe patterns are detected (for example, runtime-writable application code directories).

    For shared hosting and cPanel environments, that often means packaging applications so the public tree is predictable, writable paths are isolated, and the final state matches the host’s handler expectations. In plain language: we make “secure-by-default” the path of least resistance.

    2. Tailored access-control implementations when permissions alone are not enough

    File permissions are necessary, but they are not sufficient. A readable file inside public_html is still readable, even if you “promise not to link to it.” Access control is where we turn promises into enforceable policy: deny rules for sensitive paths, authentication gates for admin routes, and safe defaults for directories that must exist but must not be browsed.

    On Apache-heavy hosting, we often pair permission baselines with carefully scoped distributed configuration, but we also respect performance and maintainability. Apache itself notes that directives in .htaccess files are better placed in main configuration where possible for performance, which is why we prefer server-level config when we have that access, and minimal per-directory config when we don’t.

    Beyond Apache, application-layer controls matter just as much. Token-scoped upload endpoints, signed URLs, strict content-type validation, and server-side authorization checks are the difference between “the filesystem is locked down” and “the business is actually protected.”

    3. Maintenance, audits, and troubleshooting automation for shared hosting and cPanel stacks

    Maintenance is where secure setups either stay secure or quietly decay. Shared hosting stacks change over time—handlers get switched, security features are enabled, and defaults are hardened. Without an audit loop, teams often learn about those shifts only after a site breaks.

    Our approach is to automate “permission drift detection.” A scheduled check walks the document root, flags unexpected writable directories, detects executable flags in places they don’t belong, and verifies that writable directories match an allowlist. In incident response, the same tooling provides a fast answer to the most important question: “What changed?”
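    A minimal version of that drift check can be built from `find` alone; this sketch demonstrates it on a sandbox with deliberately planted problems (GNU findutils symbolic `-perm` syntax):

```shell
# Flag world-writable entries and stray execute bits on ordinary content
# files. Sandbox paths stand in for a real document root.
docroot="$(mktemp -d)"
mkdir -p "$docroot/cache"
chmod 777 "$docroot/cache"                 # simulated drift
echo 'body{}' > "$docroot/style.css"
chmod 755 "$docroot/style.css"             # stray execute bit

echo "world-writable entries:"
find "$docroot" -perm -o+w
echo "executable content files:"
find "$docroot" -type f \
    \( -name '*.css' -o -name '*.html' -o -name '*.png' \) -perm -u+x
```

    In a real audit loop, the output would be diffed against an allowlist and any unexpected entry would page a human.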

    When troubleshooting, we also prioritize evidence over guesswork. Server logs, handler configuration, ownership checks, and traversal validation beat trial-and-error changes every time. Done well, the result is not just fewer outages; it’s a hosting environment where security posture can be explained, defended, and reproduced.

    Conclusion: choosing the right public_html permissions for your server model

    1. Start from your host’s default permissions and adjust only to the minimum required

    Secure permissions are not a universal recipe; they’re a negotiation with your execution model. Start with what your host expects, then tighten only where the platform can support it. When teams fight the platform by forcing an incompatible permission scheme, the usual outcome is either downtime or a panicked rollback to permissive settings.

    From our vantage point, the safest path is incremental: accept the host baseline, isolate writable paths, remove write access from code, and keep anything sensitive out of the web root. That combination yields meaningful risk reduction without requiring heroics or fragile hacks.

    2. Validate by testing real requests and confirming ownership, groups, and traversal rules

    Validation must be done the way attackers and users interact with the site: through real requests. Check that public pages load, uploads work, caches behave, and admin actions succeed without granting broad write access. Then confirm the underlying mechanics: ownership is correct, group strategy matches the server model, and traversal rules are consistent from the filesystem root down to the deepest served asset.

    In our audits, the biggest wins come from this exact loop: test externally, verify internally, and only then declare a permission baseline “done.” Skipping the verification step is how silent vulnerabilities survive for months, waiting for the wrong bot, the wrong plugin exploit, or the wrong misconfiguration.

    3. Use least privilege plus targeted access controls to reduce risk without breaking the site

    Least privilege is the destination, but targeted access controls are the bridge that gets you there safely. Permissions should prevent unauthorized modification, while server rules and application controls prevent unintended exposure. Together, they form a layered defense that makes “oops” moments survivable and makes deliberate attacks harder to scale.

    So what’s the next step? If we were sitting with you in a war room, we’d ask you to pick one site, identify its true runtime writers, isolate those directories, and then lock down everything else—would you rather start with a quick permission-and-path audit, or jump straight into automating enforcement in your deployment pipeline?