Hyperscale data centre — 18-month build
78 weeks · site selection → commissioning

[Interactive Gantt timeline: weeks 0–78 across six phases — Site & permits; Civil + structural; Shell & core; MEP install; Integration & test; Commissioning (L1–L5).]

Every data centre construction timeline published by a vendor or EPC firm has the same problem: it treats the 18-month schedule as the default and leaves out the two years of site selection, permitting, grid-interconnection study, and long-lead equipment procurement that make the 18 months possible. The real answer is that a hyperscale data centre takes somewhere between 18 and 36 months of physical construction, preceded by a 6-to-18-month permitting process, preceded by grid interconnection that in some markets runs to multiple years. The 18-month figure is real — it’s what accelerated modular builds with pre-secured power and standardised designs actually deliver — but it’s not the average, and presenting it as such has produced a lot of late projects.

This article is the realistic 18-month phased Gantt for an accelerated hyperscale build, with honest markers for where the schedule slips and why. The target reader is a data centre construction PM, a hyperscaler site selection lead, or a specialist GC project executive. If you’ve worked on one of these projects, most of this will be familiar. If you’re planning your first, the phases that look compressed are compressed for a reason — and the sequencing decisions made in Phase 1 determine whether the 18-month target is achievable or aspirational.

The US data centre construction market in 2026

Context first, because it shapes every scheduling decision downstream.

The five largest US hyperscalers (Amazon, Alphabet, Meta, Microsoft, Oracle) have collectively guided to approximately $660–690 billion in 2026 capital expenditure, up about 70% from 2025. Roughly three-quarters of that — around $450 billion — is tied to AI infrastructure, with the bulk going into data centre construction, power procurement, and the IT equipment that fills the facilities. Amazon alone is at $200 billion; Google at $175–185 billion; Meta at $115–135 billion; Microsoft tracking toward $120 billion or more; Oracle at $50 billion. The Stargate joint venture adds another layer, targeting $500 billion in AI infrastructure investment by 2029 across five US sites totalling roughly 7 GW of capacity.

What this means for schedulers: demand is not the constraint. The constraints are physical and logistical. Power transformers have been running at an average 128-week lead time into 2025. Large switchgear is around 44 weeks. Grid interconnection queues are years long in the fastest-growing metros. Skilled electricians — especially those qualified for precision MV wiring — face the tightest labour shortage the industry has seen, despite ABC’s January 2026 forecast of softer overall construction hiring demand. The binding constraints on 2026 data centre schedules are not steel and concrete; they are electrons and qualified MEP trades.

The 18-month schedule below assumes you have solved those upstream problems — that power is secured, permits are in hand, long-lead equipment is ordered. If you haven’t, read this article as the “what comes after” part of a project that won’t actually start for another year.

The 18-month timeline at a glance

An accelerated hyperscale build with modular electrical rooms, pre-fabricated shell components, and long-lead equipment already on order breaks into six phases:

Phase | Timeline | Key outcome
1. Site selection and entitlements | Months 1–4 (often pre-started) | Land control, initial permits, grid capacity confirmation
2. Design and permitting | Months 3–8 | Final design, full permit package, construction drawings
3. Site preparation and civil | Months 6–10 | Grading, foundations, utilities stubbed, site access
4. Shell and core | Months 9–14 | Weather-tight enclosure, structural steel, roofing
5. MEP rough-in and fit-out | Months 12–16 | Electrical rooms energised, cooling installed, interiors
6. Commissioning | Months 16–18 | L1–L5 testing, handover, operational readiness

Phases overlap substantially. The schedule only works because Phase 3 starts before Phase 2 is complete, Phase 5 starts before Phase 4 finishes, and commissioning begins while construction is still in progress. The Gantt visual in this article shows those overlaps explicitly — linear sequencing would add six to nine months to the total, which is why cheaper non-modular designs take 24 to 36 months.
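The overlap arithmetic can be sketched directly from the phase table. The spans below are the start/end months from the table above; this is an illustrative toy model of overlapped versus fully serial sequencing, not a real CPM network (a fully serialised plan is an upper bound — real "linear" schedules still compress some work within phases, which is why the quoted penalty is six to nine months rather than the full difference):

```python
# Toy model: each phase as (start_month, end_month), from the phase table above.
phases = {
    "Site selection and entitlements": (1, 4),
    "Design and permitting": (3, 8),
    "Site preparation and civil": (6, 10),
    "Shell and core": (9, 14),
    "MEP rough-in and fit-out": (12, 16),
    "Commissioning": (16, 18),
}

# Overlapped finish: the latest phase end month.
overlapped_end = max(end for _, end in phases.values())

# Fully serial duration: sum of individual phase lengths (inclusive months).
serial_total = sum(end - start + 1 for start, end in phases.values())

overlap_savings = serial_total - overlapped_end

print(f"Overlapped finish: month {overlapped_end}")
print(f"Fully serial finish: month {serial_total}")
print(f"Saved by overlapping: {overlap_savings} months")
```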

Phase 1: Site selection and entitlements (months 1–4)

Practically speaking, Phase 1 often starts before “month 1” on the construction schedule. Hyperscalers and major colocation developers maintain rolling site-selection pipelines that identify suitable parcels 2–5 years before construction start. When a development goes active, the site has often been under option for 12–18 months already.

What actually happens in this phase:

  • Power capacity confirmation. Does the substation have available headroom? If not, what’s the timeline for the upgrade, and who pays? Many 2026 deals are contingent on utility-funded upgrades that may or may not materialise in time.
  • Grid interconnection study. In PJM, CAISO, and ERCOT, the study processes alone run 12–24 months. Sites that can enter interconnection queues with prior-phase studies complete have a massive scheduling advantage over greenfield sites. See Solar + Battery Storage Project Timeline for US Utility Developers for the interconnection detail.
  • Zoning and entitlements. Increasingly contested in 2025–2026. Maine’s recent large-data-centre ban isn’t an isolated event; New England and the Pacific Northwest have seen growing local opposition. Expect 2–3 additional state-level moratoria through 2026.
  • Environmental review. Typically 3–9 months depending on site and jurisdiction. Federal wetlands findings and NEPA triggers can add 6–12 months.
  • Water availability. Often the invisible constraint in cooling-intensive designs. Many 2026 projects are being redesigned for water-free cooling specifically because the local water utility can’t commit to the draw.

Common slippage points: interconnection study delays, unexpected environmental findings, and zoning fights. Budget 6 months of schedule risk even on sites that look clean in preliminary due diligence.

Phase 2: Design and permitting (months 3–8)

Design and permitting run in parallel with the back half of Phase 1. This is where the accelerated schedule’s modular and standardised-design strategy pays back. Hyperscalers have invested heavily in design libraries that are essentially repeated across sites with minor variations — the same MEP room kit, the same shell module, the same cooling topology. Colocation builders increasingly do the same.

Key activities:

  • Permit package compilation. Building permits, electrical permits, fire suppression, mechanical, plumbing, low-voltage, and any specialist permits (hazardous materials storage for battery rooms, generator fuel, and so on).
  • Construction drawings and specifications. The 90% and 100% drawing sets that contractors bid from.
  • Long-lead procurement confirmation. Transformers ordered in Phase 1 get confirmed delivery dates. If the utility scope expands after detailed studies complete, substation equipment orders may need to expand — with corresponding lead-time impacts.
  • Bid packages and contractor selection. Typically 2–4 weeks each for major trade packages, concurrent with permitting.
  • Utility coordination. Interconnection agreement execution, metering arrangements, backup-generation interfaces, and any on-site substation scope.

Common slippage points: permit reviewer backlogs in boom markets (Northern Virginia, Phoenix, Dallas metros), AHJ requests for design changes after initial submittal, and utility interconnection agreement negotiations that extend beyond the optimistic schedule. Northern Virginia in particular has seen permit review times double since 2023 as local agencies struggle with volume.

Phase 3: Site preparation and civil (months 6–10)

Physical construction starts. Site work typically begins as soon as initial permits (site grading, stormwater, utility stubs) are in hand, even if final building permits aren’t complete.

Key activities:

  • Clearing and grading. For a typical 200,000 sq ft hyperscale hall, this is 2–4 weeks of earthwork.
  • Stormwater management. Ponds, detention, bioswales — often 30–40% of the site’s civil scope by cost.
  • Foundations. Data centre foundations are unusually heavy due to equipment floor loads. Deep piling is common. Slab level and flatness are critical — equipment installations in Phase 5 depend on millimetre-grade slab tolerances.
  • Utility stubs. Electrical, water, sewer, gas (where applicable), fibre.
  • Site access and logistics. Haul roads, crane pads, laydown areas, security fencing. Temporary power from diesel generators typically energised here for construction loads.
  • Substation civil work where an on-site substation is in scope — and on most hyperscale builds it is.

Common slippage points: weather (winter in northern climates can add 4–8 weeks), unexpected soil conditions requiring additional engineered fill or deep piling, and coordination with utility-provided scope where the schedule depends on a utility contractor meeting a commitment they may not actually be tracking tightly.

Phase 4: Shell and core (months 9–14)

The building itself. This is the phase that looks like traditional construction — structural steel, roof, exterior walls, exterior doors. It goes fast on hyperscale projects because the designs are standardised and the contractors building them do it repeatedly.

Key activities:

  • Structural steel erection. For a typical hyperscale hall, 6–10 weeks. The sequence of steel and concrete defines the critical path through this phase.
  • Roofing. Weather-tight typically at month 11–12. This milestone matters because MEP interior work can’t begin in earnest until the building is weather-tight.
  • Exterior envelope. Metal panels, glazing where applicable, doors, loading docks.
  • Interior slab-on-deck. Raised floors are common but not universal; modern designs increasingly use slab-on-grade with cable trays above.
  • Fire protection rough-in. Sprinklers or pre-action systems, depending on design. Water-based systems require earlier piping work than gaseous suppression.

This phase is typically the most “normal” in the schedule. Slippage, when it happens, tends to be weather-driven or steel-delivery-driven rather than design-driven.

Phase 5: MEP rough-in and fit-out (months 12–16)

The critical phase. MEP work is where accelerated schedules actually succeed or fail, and it’s the phase that defines the real critical path on most data centre builds.

The MEP work here is specialist enough and important enough that it deserves its own article: Hyperscale Data Center MEP Coordination: Critical Path Gantt. What follows here is the summary-level view.

Key activities:

  • Electrical rooms. Switchgear, UPS, PDUs, batteries, transformers all arrive and install during this phase. Coordination with the utility’s substation energisation is critical. Modular electrical rooms delivered pre-assembled to site speed this up dramatically.
  • Mechanical. Chillers, CRAH or CRAC units (or CDUs and cooling distribution for liquid-cooled halls), piping, pumps, air handlers.
  • Fire suppression final. Pre-action piping, suppression agent storage, detection systems.
  • Low-voltage and BMS. Building management systems, DCIM infrastructure, security, access control.
  • Interior finishes. Raised flooring (if used), ceilings, walls, wire managers, cable basket systems.

The MEP critical path through this phase is typically dominated by switchgear and transformer readiness. A switchgear lineup that arrives two weeks late pushes out commissioning by at least that much, and often more because the downstream crews lose their window. Long-lead items ordered in Phase 2 with delivery dates confirmed in Phase 3 need to be tracked weekly through this phase, not monthly.
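The weekly tracking discipline amounts to comparing each long-lead item's confirmed delivery date against its need-by date and flagging anything with negative or thin float. A minimal sketch — the item names, week numbers, and review threshold are illustrative, not from any real project:

```python
# Minimal long-lead item tracker. All dates are construction-week numbers.
REVIEW_THRESHOLD_WEEKS = 2  # float below this gets a weekly-review flag

long_lead_items = [
    # (item, need-by week, confirmed delivery week) -- illustrative values
    ("Main switchgear lineup A", 52, 50),
    ("Main switchgear lineup B", 54, 55),   # arrives after its need-by date
    ("Unit substation transformers", 50, 49),
]

flags = []
for item, need_by, confirmed in long_lead_items:
    float_weeks = need_by - confirmed
    if float_weeks < 0:
        flags.append(("CRITICAL", item, float_weeks))
    elif float_weeks < REVIEW_THRESHOLD_WEEKS:
        flags.append(("WATCH", item, float_weeks))

for status, item, fw in flags:
    print(f"{status}: {item} (float {fw}w)")
```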

Common slippage points: switchgear or transformer delivery delays (the single most common cause of data centre schedule slip in 2025–2026), BMS integration failures during L3/L4 commissioning that force re-work, and the compounding effect of skilled-electrician shortages in boom markets slowing multi-trade coordination.

Phase 6: Commissioning (months 16–18)

Commissioning is its own discipline with its own sequencing, and it starts well before Phase 6 nominally begins — L1 factory witness testing on long-lead equipment typically happens during Phase 4 or early Phase 5, not in Phase 6. What happens in Phase 6 is the L2 through L5 sequence that verifies the integrated facility performs as designed.

The L1–L5 levels in brief:

  • L1 — Factory Witness Testing (FWT). Verifies equipment meets spec at the factory before shipment. Happens during manufacturing phase, not on site.
  • L2 — Site Acceptance / Pre-Installation. Verifies equipment arrived undamaged and is correctly placed.
  • L3 — Pre-Functional Testing (PFT). Each system tested individually for standalone operation.
  • L4 — Functional Performance Testing (FPT). Each system tested under operational scenarios including failure modes.
  • L5 — Integrated System Testing (IST). All systems tested together under full load including worst-case failure scenarios. This is where the whole facility stands up or doesn’t.
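In practice the L1–L5 sequence is strictly ordered per system — a system only advances to a level once every earlier level has passed. That gating rule can be expressed in a few lines (the function and its API are illustrative, not from any commissioning tool):

```python
# Commissioning level gate: a system may start a level only if every
# earlier level has already passed. Level names are from the list above.
LEVELS = ["L1", "L2", "L3", "L4", "L5"]

def can_start(level: str, passed: set[str]) -> bool:
    """Return True if all levels before `level` are in the passed set."""
    idx = LEVELS.index(level)
    return all(prior in passed for prior in LEVELS[:idx])

passed = {"L1", "L2"}
print(can_start("L3", passed))  # True  -- L1 and L2 complete
print(can_start("L4", passed))  # False -- L3 not yet passed
```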

Depth on each level in Data Center Commissioning Timeline: The Final Eight Weeks.

Common slippage points: L4 failures requiring hardware or controls rework, L5 failures that reveal integration issues between systems supplied by different vendors, and commissioning agent availability — skilled Cx agents are as scarce as skilled electricians, and small commissioning teams stretched across multiple concurrent hyperscale builds routinely hit bottlenecks.

Critical path highlights

For anyone running the schedule, the critical path on a 2026 hyperscale build is almost never what the original bar chart suggests. The physical construction work (steel, concrete, envelope) rarely drives the critical path. What drives it:

Utility substation energisation. If the facility can’t be energised from the grid on schedule, nothing downstream matters. Utility-funded substation work is typically managed by the utility’s own contractors, with the data centre developer having limited ability to accelerate it. Sites where the utility substation is on the GC’s scope (less common but increasing) are marginally easier to schedule.

Switchgear and transformer delivery. As discussed above, these are the long-pole items for most builds. At the 128-week average transformer lead times seen in 2025, a transformer needed on site in Phase 5 has to be on order well over a year before construction month 1 — which is why the Phase 1 procurement emphasis is non-negotiable. Missing the delivery window by even two weeks cascades.
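The back-calculation is worth doing explicitly: take the 128-week lead time cited above, assume a need-by in early Phase 5 (months 12–16), and the purchase order lands roughly a year and a half before construction month 1. A quick sketch — the need-by month is an assumption for illustration:

```python
# Back-calculate the transformer order-by date from need-by and lead time.
WEEKS_PER_MONTH = 52 / 12

transformer_lead_weeks = 128              # 2025 average cited above
need_by_month = 13                        # early Phase 5 (months 12-16), assumed
need_by_week = round(need_by_month * WEEKS_PER_MONTH)

# Negative week number means the order predates construction month 1.
order_by_week = need_by_week - transformer_lead_weeks
print(f"Order by construction week {order_by_week}, "
      f"i.e. ~{abs(order_by_week) / WEEKS_PER_MONTH:.0f} months before month 1")
```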

Grid interconnection final sign-off. The utility’s final approval to energise is a paperwork milestone that can be surprisingly slow. Regulatory filings, final protection study approvals, and sometimes public-hearing requirements all fit in here.

Commissioning agent capacity. A new constraint in 2026 that wasn’t on most 2023 schedules. Qualified commissioning teams are booked out a year ahead in most US metros. The Cx schedule may be the binding constraint on your go-live date, not the construction schedule.

Skilled electrician availability. Especially for MV precision wiring in Phase 5 and for commissioning support in Phase 6. The ABC 2026 labour forecast notwithstanding (349,000 net new workers needed overall), electrical trades remain in acute shortage in every major data centre metro.
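The claim that the real critical path runs through power rather than construction can be made concrete with a forward pass over a small dependency network. The tasks, durations, and links below are illustrative, not a real project network — but note that the longest path runs through the utility and factory items, not civil and steel:

```python
# Forward-pass earliest-finish over a toy network (durations in weeks).
tasks = {
    "civil":         (16, []),
    "shell":         (20, ["civil"]),
    "substation":    (40, []),              # utility-controlled scope
    "switchgear":    (48, []),              # factory lead time
    "mep":           (16, ["shell", "switchgear"]),
    "energise":      (2,  ["substation", "mep"]),
    "commissioning": (10, ["energise"]),
}

earliest_finish: dict[str, int] = {}

def finish(task: str) -> int:
    """Earliest finish = duration + latest predecessor finish (memoised)."""
    if task not in earliest_finish:
        duration, preds = tasks[task]
        earliest_finish[task] = duration + max((finish(p) for p in preds), default=0)
    return earliest_finish[task]

end = finish("commissioning")
print(f"Earliest finish: week {end}")  # switchgear drives it, not civil + shell
```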

Where hyperscale schedules actually slip

An 18-month schedule is achievable. It is not the average. A realistic sensitivity analysis across the 2025–2026 project sample shows:

  • Target schedule (accelerated, modular, pre-secured power): 18 months construction.
  • Typical hyperscale schedule: 22–28 months construction.
  • Constrained-market schedule (grid upgrades, permitting delays): 30–36+ months.
  • Full lifecycle (site selection through go-live): 36–72 months.

The most common cause of slippage is not one big failure — it’s the compounding of 2–4 small delays across multiple phases. A two-week permit delay in Phase 2, a one-week weather delay in Phase 3, a three-week switchgear delay in Phase 5, and a one-week commissioning schedule slip in Phase 6 adds up to seven weeks on the end date. Each is individually manageable. Combined, they move the go-live date into the next quarter.
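The compounding arithmetic is trivial, but worth making explicit because each delay owner only ever sees their own line item:

```python
# The individually small delays from the example above, in weeks.
delays = {
    "Phase 2 permit review": 2,
    "Phase 3 weather": 1,
    "Phase 5 switchgear delivery": 3,
    "Phase 6 commissioning slot": 1,
}

total_slip = sum(delays.values())
print(f"Total end-date slip: {total_slip} weeks")
```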

The firms that actually hit the 18-month target have three things in common: they’ve built this exact design before, they have pre-secured power and equipment before site mobilisation, and their MEP coordination cadence is weekly or twice-weekly with leadership in the room. The schedule is not a scheduling problem; it’s a supply-chain and coordination problem that happens to be visualised on a schedule.

Hyperscale vs enterprise timeline

A smaller enterprise or colocation build runs a different schedule shape. Smaller scale lets more activities run in series rather than parallel, which simplifies coordination but extends overall duration relative to floor area. Typical ranges:

Facility type | Typical construction duration | Typical full lifecycle
Enterprise data centre (5–15 MW) | 12–18 months | 24–36 months
Mid-market colocation (15–50 MW) | 18–24 months | 30–48 months
Hyperscale single hall (50–150 MW) | 18–24 months (accelerated) | 36–60 months
Hyperscale campus (150 MW+, multi-building) | 24–36+ months (rolling deliveries) | 48–84 months

The hyperscale acceleration is paid for in procurement complexity, vendor concentration risk, and the up-front work needed in Phase 1 to make everything else possible. Enterprise-scale builds that try to replicate hyperscale speed without the supply-chain infrastructure usually produce schedules that look aggressive on paper and slip on execution.

FAQ

Q: Is 18 months really achievable for a 100 MW data centre?

With modular electrical rooms, pre-fabricated shell components, standardised designs, pre-secured grid capacity, and all long-lead equipment already on order, yes. Without any one of those conditions, no. The typical 2026 hyperscale build runs 22–28 months. The 18-month target requires every pre-condition to be met.

Q: What’s the single largest cause of schedule slip on data centre builds in 2026?

Long-lead electrical equipment delivery, followed closely by skilled-electrician availability during MEP rough-in. Transformer lead times averaged 128 weeks into Q2 2025. Switchgear around 44 weeks. Missing a manufacturer-confirmed delivery date by even two weeks typically cascades into four to six weeks on the end date.

Q: How much does a hyperscale data centre cost?

Industry averages sit around $10–12 million per megawatt of IT load, per CBRE and Uptime Institute data. In high-cost metros or markets with severe labour shortages, that can push above $15 million per MW. A 100 MW hall is therefore a $1.0–1.5 billion project, which scales the consequences of schedule slip substantially.
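Scaling those per-MW figures (the dollar values are the industry averages quoted above; the 100 MW hall size matches the FAQ's example):

```python
# Project cost range for a 100 MW hall at the quoted per-MW averages.
it_load_mw = 100
cost_per_mw_low, cost_per_mw_high = 10e6, 15e6   # $/MW, low metro vs constrained metro

low = it_load_mw * cost_per_mw_low
high = it_load_mw * cost_per_mw_high
print(f"100 MW hall: ${low / 1e9:.1f}B to ${high / 1e9:.1f}B")
```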

Q: Can the permitting phase be accelerated?

Marginally. Pre-submittal meetings with the AHJ can reduce review iterations. Using a design that has been permitted before in the same jurisdiction speeds reviewer confidence. Paying for expedited review where that’s available (some jurisdictions offer it; many do not) shaves weeks. What doesn’t accelerate permitting is any amount of private-sector urgency — the AHJ works at their own pace.

Q: What’s the difference between L1 and L5 commissioning?

L1 is factory acceptance testing before equipment ships. L5 is full integrated system testing with all facility systems operating together under load and failure scenarios. L1 happens during Phase 4; L5 happens at the end of Phase 6. The full sequence is covered in Data Center Commissioning Timeline: The Final Eight Weeks.

Q: What scheduling software do hyperscale builders actually use?

Primavera P6 dominates for the master schedule. Procore or Autodesk Construction Cloud for the platform-level coordination. Fieldwire for field task management. Most hyperscale builds run all three concurrently — P6 is the schedule of record, Procore or Autodesk Construction Cloud is the project management platform, and Fieldwire is what the foremen actually open on their tablets. See Best Construction Scheduling Software for US General Contractors 2026 for the full landscape and Primavera P6 vs MS Project for US Construction for the CPM-scheduling-engine decision.

Q: Why is the critical path usually power rather than construction?

Because the construction work has been industrially optimised over a decade of hyperscale builds, while the grid infrastructure supporting these facilities has not. Concrete pours and steel erection are predictable. Utility substation upgrades, transformer deliveries, and grid interconnection approvals are not. The scheduling discipline that matters most on data centre builds is tracking the items that aren’t under the GC’s direct control — which is most of the critical path.

Q: How do you manage the grid interconnection risk?

Early and carefully. Sites should be screened for substation headroom and queue position before land is placed under option, not after. Phased energisation strategies — aligning initial IT load with existing grid capacity, then expanding as upgrades come online — are increasingly common. Dual-substation designs where feasible reduce dependency on any single utility project. The fast-moving teams in 2026 treat interconnection as a first-order site selection criterion, not a late-stage detail.