Designing Resilient Backup Architecture: Where to Place Cloud vs. On-Prem Targets. Q&A with Paul Speciale
“At the core should be the 3-2-1 backup rule, adapted for today’s threat environment: a primary on-prem copy for rapid restoration, a secondary copy stored offsite, often in the cloud, for geographic resilience, and a third air-gapped copy to guard against ransomware and malicious compromise.”
Q1: Paul, when an organization is designing its backup target architecture, what are the key questions they should ask themselves first to determine whether cloud, on-prem, or a hybrid approach is the right fit for their specific situation?
Paul Speciale: Good question, Roberto! When designing a backup target architecture, organizations should start with a clear understanding of their environment: current infrastructure, workload characteristics, growth trends, and operational constraints. A major factor is the applications themselves: are they on-prem, in the cloud, or hybrid? Decisions should be requirements-driven, aligned with service levels, risk tolerance, and regulatory obligations, rather than technology preference.
Key factors include projected data growth, scalability, RTO/RPO targets, workload sensitivity (e.g., latency), and network capacity for replication and recovery. A thorough Total Cost of Ownership analysis, including CapEx, OpEx, licensing, staffing, and management overhead, helps ensure long-term efficiency. Considering these factors allows organizations to design a resilient, scalable, and future-proof architecture, whether on-prem, in the cloud, or in a hybrid environment. Tangibly, for on-prem in corporate data centers, many organizations have adopted a standard multi-copy backup methodology consisting of a primary copy on-prem (for fast restore), one copy offsite (often in the cloud), and a third air-gapped copy.
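The multi-copy methodology Paul describes maps onto the classic 3-2-1 rule: three copies, on two distinct media, with at least one offsite. As one illustration, a hypothetical policy check along these lines (the `BackupCopy` type and field names are my own, not any vendor's schema) might look like:

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    location: str      # e.g. "on-prem", "cloud", "offsite-vault"
    medium: str        # e.g. "disk", "object-storage", "tape"
    offsite: bool
    air_gapped: bool

def satisfies_3_2_1(copies: list[BackupCopy]) -> bool:
    """3 copies, on at least 2 distinct media, with at least 1 offsite."""
    return (len(copies) >= 3
            and len({c.medium for c in copies}) >= 2
            and any(c.offsite for c in copies))
```

A deployment following the scheme in the answer above, a fast on-prem disk copy, a cloud object-storage copy, and an air-gapped third copy, would pass this check; any two-copy setup would not.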
Q2: Cost is often the first thing organizations look at when comparing cloud and on-prem backup targets, but what are the hidden or less obvious factors that can significantly change the equation over time, and how should organizations account for them in their planning?
Paul Speciale: While upfront cost is often the first consideration, long-term economics are influenced by less obvious factors. In cloud deployments, we have two main considerations: speed of restore, and data egress and retrieval fees, which can add up during large-scale restores. On-prem solutions require ongoing investment in staffing, patching, capacity planning, and hardware refresh cycles. Cloud elasticity can reduce overprovisioning, while on-prem infrastructure typically involves depreciation and lifecycle management. Compliance, encryption, and governance requirements may also impact operational overhead differently across deployment models. A complete cost assessment should go beyond initial acquisition and consider data growth, scalability, evolving regulatory requirements, and operational efficiency, ensuring a financially sustainable, future-proof backup architecture.
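To make the egress-fee point concrete, a back-of-the-envelope estimate of a large-scale restore can be sketched as below. The per-GB rates are illustrative placeholders only, not any provider's actual pricing; real rates vary by provider, region, and storage tier.

```python
def restore_cost_usd(data_gb: float,
                     egress_per_gb: float = 0.09,      # assumed, illustrative rate
                     retrieval_per_gb: float = 0.01) -> float:
    """Rough one-time cost of pulling a full backup set out of cloud storage."""
    return data_gb * (egress_per_gb + retrieval_per_gb)

# Restoring 100 TB at these assumed rates lands in the five-figure range,
# a cost that never appears in the monthly storage bill until disaster strikes.
full_restore = restore_cost_usd(100_000)  # 100 TB expressed in GB
```

The point of the exercise is not the exact number but that restore-time costs scale linearly with data volume, so they belong in the TCO model alongside the recurring storage fees.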
Q3: Data sovereignty, compliance, and security requirements are increasingly shaping infrastructure decisions. How do these considerations influence where organizations should place their backup targets, and are there workloads or industries where on-prem remains the clear choice regardless of cloud economics?
Paul Speciale: Local and international regulatory frameworks increasingly influence backup architecture decisions. Data sovereignty laws may require that certain datasets remain within national borders, while industry-specific compliance standards, particularly in healthcare, financial services, and government sectors, often necessitate enhanced control, traceability, and auditability that favor on-prem deployments. Additionally, highly sensitive or mission-critical workloads may require logical or physical isolation from multi-tenant cloud environments to meet strict security and risk management policies. In such cases, on-prem infrastructure frequently remains the primary strategy. Hybrid architectures, however, can provide a balanced approach by leveraging cloud platforms for less sensitive, secondary, or archival data while maintaining direct control over regulated or high-risk workloads.
Q4: Ransomware and cyber resilience have become top priorities for IT leaders. How does the choice between cloud and on-prem backup targets affect an organization’s ability to recover quickly and reliably from a cyberattack, and what architectural principles should guide that decision?
Paul Speciale: Cyber resilience is fundamentally shaped by backup architecture design. The overriding concern now is recoverability, which requires clear knowledge of the last known good backup, and restore performance. For on-prem applications, clearly a primary backup copy co-located on-prem can provide high-performance access and a high degree of operational control, enabling rapid restores and the ability to implement logically or physically isolated protection domains. Cloud-based storage, by contrast, delivers geographic diversity, built-in durability, versioning, and object immutability, though recovery performance is inherently dependent on available network bandwidth and data transfer constraints.

Effective cyber resilience strategies are anchored in several core principles: immutable backup copies to prevent tampering, isolation through air-gapped or off-network repositories to mitigate lateral attack propagation, redundancy across local and cloud tiers to eliminate single points of failure, and automated validation through routine restore testing to ensure recoverability under real-world conditions.

By designing a hybrid architecture with policy-driven, immutable backups, organizations gain predictable, rapid recoverability on-prem while leveraging the geographic separation and durability of the cloud, ensuring cyber resilience against both targeted attacks and large-scale disruptions.
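The last principle, automated validation through routine restore testing, is the one most often skipped because it seems hard to automate. A minimal building block is simply comparing a cryptographic digest of the restored data against the digest recorded at backup time; the sketch below (function and argument names are my own) shows the idea using SHA-256:

```python
import hashlib

def verify_restore(original_digest: str, restored_path: str) -> bool:
    """Confirm a restored file matches the SHA-256 digest recorded at backup time.

    Reads in 1 MiB chunks so very large restore artifacts do not need
    to fit in memory.
    """
    h = hashlib.sha256()
    with open(restored_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == original_digest
```

Scheduling checks like this against periodic test restores is what turns "we have backups" into verified knowledge of the last known good copy.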
Q5: Many organizations find themselves managing a mix of cloud and on-prem backup targets that evolved organically over time rather than by design. What practical advice would you give to IT leaders looking to rationalize and optimize their backup target architecture without disrupting their existing operations?
Paul Speciale: Organizations with organically developed backup environments can unlock meaningful optimization without disrupting operations by establishing comprehensive visibility across their data protection landscape. As mentioned earlier, at the core should be the 3-2-1 backup rule, adapted for today’s threat environment: a primary on-prem copy for rapid restoration, a secondary copy stored offsite, often in the cloud, for geographic resilience, and a third air-gapped copy to guard against ransomware and malicious compromise. Creating a clear inventory of backup targets, protected workloads, data classifications, retention policies, and service-level requirements provides the foundation for rationalization and risk alignment. By combining disciplined 3-2-1 execution with enhanced visibility, tiered protection, and automation, organizations can simplify hybrid environments, reduce operational risk, and materially strengthen cyber resilience while maintaining day-to-day operational stability.
……………………………………………

Paul Speciale, CMO Scality
Over 20 years of experience in Technology Marketing & Product Management. Key member of the team at four high-profile startup companies and two Fortune 500 companies.
Sponsored by Scality